00:00:00.001 Started by upstream project "autotest-per-patch" build number 126250 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.107 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.107 The recommended git tool is: git 00:00:00.108 using credential 00000000-0000-0000-0000-000000000002 00:00:00.109 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.172 Fetching changes from the remote Git repository 00:00:00.174 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.220 Using shallow fetch with depth 1 00:00:00.220 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.220 > git --version # timeout=10 00:00:00.247 > git --version # 'git version 2.39.2' 00:00:00.247 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.274 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.274 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.490 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.501 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.512 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:05.512 > git config core.sparsecheckout # timeout=10 00:00:05.522 > git read-tree -mu HEAD # timeout=10 00:00:05.537 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:05.558 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:05.559 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:05.660 [Pipeline] Start of Pipeline 00:00:05.671 [Pipeline] library 00:00:05.673 Loading library shm_lib@master 00:00:06.834 Library shm_lib@master is cached. Copying from home. 00:00:06.872 [Pipeline] node 00:00:06.933 Running on GP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.936 [Pipeline] { 00:00:06.950 [Pipeline] catchError 00:00:06.953 [Pipeline] { 00:00:06.971 [Pipeline] wrap 00:00:06.985 [Pipeline] { 00:00:06.995 [Pipeline] stage 00:00:06.996 [Pipeline] { (Prologue) 00:00:07.171 [Pipeline] sh 00:00:07.452 + logger -p user.info -t JENKINS-CI 00:00:07.493 [Pipeline] echo 00:00:07.495 Node: GP8 00:00:07.505 [Pipeline] sh 00:00:07.803 [Pipeline] setCustomBuildProperty 00:00:07.821 [Pipeline] echo 00:00:07.823 Cleanup processes 00:00:07.831 [Pipeline] sh 00:00:08.117 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.117 2125836 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.132 [Pipeline] sh 00:00:08.456 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.456 ++ grep -v 'sudo pgrep' 00:00:08.456 ++ awk '{print $1}' 00:00:08.456 + sudo kill -9 00:00:08.456 + true 00:00:08.468 [Pipeline] cleanWs 00:00:08.476 [WS-CLEANUP] Deleting project workspace... 00:00:08.476 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.482 [WS-CLEANUP] done 00:00:08.485 [Pipeline] setCustomBuildProperty 00:00:08.499 [Pipeline] sh 00:00:08.776 + sudo git config --global --replace-all safe.directory '*' 00:00:08.863 [Pipeline] httpRequest 00:00:08.885 [Pipeline] echo 00:00:08.886 Sorcerer 10.211.164.101 is alive 00:00:08.894 [Pipeline] httpRequest 00:00:08.898 HttpMethod: GET 00:00:08.899 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:08.899 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:08.919 Response Code: HTTP/1.1 200 OK 00:00:08.920 Success: Status code 200 is in the accepted range: 200,404 00:00:08.920 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:11.784 [Pipeline] sh 00:00:12.069 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:12.087 [Pipeline] httpRequest 00:00:12.121 [Pipeline] echo 00:00:12.123 Sorcerer 10.211.164.101 is alive 00:00:12.133 [Pipeline] httpRequest 00:00:12.138 HttpMethod: GET 00:00:12.138 URL: http://10.211.164.101/packages/spdk_c1860effdc3ae835ee2cfbfa8bb08864c3128895.tar.gz 00:00:12.139 Sending request to url: http://10.211.164.101/packages/spdk_c1860effdc3ae835ee2cfbfa8bb08864c3128895.tar.gz 00:00:12.158 Response Code: HTTP/1.1 200 OK 00:00:12.158 Success: Status code 200 is in the accepted range: 200,404 00:00:12.159 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_c1860effdc3ae835ee2cfbfa8bb08864c3128895.tar.gz 00:00:46.658 [Pipeline] sh 00:00:46.942 + tar --no-same-owner -xf spdk_c1860effdc3ae835ee2cfbfa8bb08864c3128895.tar.gz 00:00:50.242 [Pipeline] sh 00:00:50.521 + git -C spdk log --oneline -n5 00:00:50.521 c1860effd nvme: populate socket_id for tcp controllers 00:00:50.521 91f51bb85 nvme: populate socket_id for pcie controllers 00:00:50.521 c9ef451fa nvme: add spdk_nvme_ctrlr_get_socket_id() 00:00:50.521 b26ca8289 event: add enforce_numa app option 00:00:50.521 83c8cffdc env: add enforce_numa environment option 00:00:50.531 [Pipeline] } 00:00:50.543 [Pipeline] // stage 00:00:50.551 [Pipeline] stage 00:00:50.552 [Pipeline] { (Prepare) 00:00:50.567 [Pipeline] writeFile 00:00:50.580 [Pipeline] sh 00:00:50.859 + logger -p user.info -t JENKINS-CI 00:00:50.871 [Pipeline] sh 00:00:51.151 + logger -p user.info -t JENKINS-CI 00:00:51.161 [Pipeline] sh 00:00:51.437 + cat autorun-spdk.conf 00:00:51.437 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:51.437 SPDK_TEST_NVMF=1 00:00:51.437 SPDK_TEST_NVME_CLI=1 00:00:51.437 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:51.437 SPDK_TEST_NVMF_NICS=e810 00:00:51.437 SPDK_TEST_VFIOUSER=1 00:00:51.437 SPDK_RUN_UBSAN=1 00:00:51.437 NET_TYPE=phy 00:00:51.443 RUN_NIGHTLY=0 00:00:51.448 [Pipeline] readFile 00:00:51.473 [Pipeline] withEnv 00:00:51.475 [Pipeline] { 00:00:51.487 [Pipeline] sh 00:00:51.766 + set -ex 00:00:51.766 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:51.766 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:51.766 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:51.766 ++ SPDK_TEST_NVMF=1 00:00:51.766 ++ SPDK_TEST_NVME_CLI=1 00:00:51.766 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:51.766 ++ SPDK_TEST_NVMF_NICS=e810 00:00:51.766 ++ SPDK_TEST_VFIOUSER=1 00:00:51.766 ++ SPDK_RUN_UBSAN=1 00:00:51.766 ++ NET_TYPE=phy 00:00:51.766 ++ RUN_NIGHTLY=0 00:00:51.766 + case $SPDK_TEST_NVMF_NICS in 00:00:51.766 + DRIVERS=ice 00:00:51.766 + [[ tcp == \r\d\m\a ]] 00:00:51.766 + [[ -n 
ice ]] 00:00:51.766 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:51.766 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:51.766 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:51.766 rmmod: ERROR: Module irdma is not currently loaded 00:00:51.766 rmmod: ERROR: Module i40iw is not currently loaded 00:00:51.766 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:51.766 + true 00:00:51.766 + for D in $DRIVERS 00:00:51.766 + sudo modprobe ice 00:00:51.766 + exit 0 00:00:51.776 [Pipeline] } 00:00:51.817 [Pipeline] // withEnv 00:00:51.824 [Pipeline] } 00:00:51.841 [Pipeline] // stage 00:00:51.851 [Pipeline] catchError 00:00:51.852 [Pipeline] { 00:00:51.868 [Pipeline] timeout 00:00:51.869 Timeout set to expire in 50 min 00:00:51.870 [Pipeline] { 00:00:51.886 [Pipeline] stage 00:00:51.888 [Pipeline] { (Tests) 00:00:51.903 [Pipeline] sh 00:00:52.184 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:52.184 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:52.184 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:52.184 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:52.184 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:52.184 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:52.184 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:52.184 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:52.184 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:52.184 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:52.184 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:52.184 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:52.184 + source /etc/os-release 00:00:52.184 ++ NAME='Fedora Linux' 00:00:52.184 ++ VERSION='38 (Cloud Edition)' 00:00:52.184 ++ ID=fedora 00:00:52.184 ++ VERSION_ID=38 00:00:52.184 ++ VERSION_CODENAME= 00:00:52.184 ++ PLATFORM_ID=platform:f38 00:00:52.184 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:52.184 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:52.184 ++ LOGO=fedora-logo-icon 00:00:52.184 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:52.184 ++ HOME_URL=https://fedoraproject.org/ 00:00:52.184 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:52.184 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:52.184 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:52.184 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:52.184 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:52.184 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:52.184 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:52.185 ++ SUPPORT_END=2024-05-14 00:00:52.185 ++ VARIANT='Cloud Edition' 00:00:52.185 ++ VARIANT_ID=cloud 00:00:52.185 + uname -a 00:00:52.185 Linux spdk-gp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:52.185 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:53.118 Hugepages 00:00:53.118 node hugesize free / total 00:00:53.118 node0 1048576kB 0 / 0 00:00:53.118 node0 2048kB 0 / 0 00:00:53.118 node1 1048576kB 0 / 0 00:00:53.118 node1 2048kB 0 / 0 00:00:53.118 00:00:53.118 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:53.118 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:00:53.118 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:00:53.118 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:00:53.118 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - 
- 00:00:53.118 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:00:53.118 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:00:53.118 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:00:53.118 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:00:53.118 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:00:53.118 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:00:53.119 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:00:53.119 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:00:53.119 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:00:53.119 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:00:53.119 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:00:53.119 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:00:53.119 NVMe 0000:82:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:00:53.119 + rm -f /tmp/spdk-ld-path 00:00:53.119 + source autorun-spdk.conf 00:00:53.119 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:53.119 ++ SPDK_TEST_NVMF=1 00:00:53.119 ++ SPDK_TEST_NVME_CLI=1 00:00:53.119 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:53.119 ++ SPDK_TEST_NVMF_NICS=e810 00:00:53.119 ++ SPDK_TEST_VFIOUSER=1 00:00:53.119 ++ SPDK_RUN_UBSAN=1 00:00:53.119 ++ NET_TYPE=phy 00:00:53.119 ++ RUN_NIGHTLY=0 00:00:53.119 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:53.119 + [[ -n '' ]] 00:00:53.119 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:53.377 + for M in /var/spdk/build-*-manifest.txt 00:00:53.377 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:53.377 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:53.377 + for M in /var/spdk/build-*-manifest.txt 00:00:53.377 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:53.378 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:53.378 ++ uname 00:00:53.378 + [[ Linux == \L\i\n\u\x ]] 00:00:53.378 + sudo dmesg -T 00:00:53.378 + sudo dmesg --clear 00:00:53.378 + dmesg_pid=2126533 00:00:53.378 + [[ Fedora Linux == FreeBSD ]] 00:00:53.378 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:53.378 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:53.378 + sudo dmesg -Tw 00:00:53.378 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:53.378 + [[ -x /usr/src/fio-static/fio ]] 00:00:53.378 + export FIO_BIN=/usr/src/fio-static/fio 00:00:53.378 + FIO_BIN=/usr/src/fio-static/fio 00:00:53.378 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:53.378 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:53.378 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:53.378 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:53.378 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:53.378 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:53.378 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:53.378 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:53.378 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:53.378 Test configuration: 00:00:53.378 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:53.378 SPDK_TEST_NVMF=1 00:00:53.378 SPDK_TEST_NVME_CLI=1 00:00:53.378 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:53.378 SPDK_TEST_NVMF_NICS=e810 00:00:53.378 SPDK_TEST_VFIOUSER=1 00:00:53.378 SPDK_RUN_UBSAN=1 00:00:53.378 NET_TYPE=phy 00:00:53.378 RUN_NIGHTLY=0 23:04:08 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:53.378 23:04:08 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:53.378 23:04:08 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:53.378 23:04:08 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:53.378 23:04:08 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:53.378 23:04:08 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:53.378 23:04:08 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:53.378 23:04:08 -- paths/export.sh@5 -- $ export PATH 00:00:53.378 23:04:08 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:53.378 23:04:08 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:53.378 23:04:08 -- common/autobuild_common.sh@444 -- $ date +%s 00:00:53.378 23:04:08 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721077448.XXXXXX 00:00:53.378 23:04:08 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721077448.gyhcTI 00:00:53.378 23:04:08 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:00:53.378 23:04:08 -- 
common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:00:53.378 23:04:08 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:53.378 23:04:08 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:53.378 23:04:08 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:53.378 23:04:08 -- common/autobuild_common.sh@460 -- $ get_config_params 00:00:53.378 23:04:08 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:00:53.378 23:04:08 -- common/autotest_common.sh@10 -- $ set +x 00:00:53.378 23:04:08 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:53.378 23:04:08 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:00:53.378 23:04:08 -- pm/common@17 -- $ local monitor 00:00:53.378 23:04:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:53.378 23:04:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:53.378 23:04:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:53.378 23:04:08 -- pm/common@21 -- $ date +%s 00:00:53.378 23:04:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:53.378 23:04:08 -- pm/common@21 -- $ date +%s 00:00:53.378 23:04:08 -- pm/common@25 -- $ sleep 1 00:00:53.378 23:04:08 -- pm/common@21 -- $ date +%s 00:00:53.378 23:04:08 -- pm/common@21 -- $ date +%s 00:00:53.378 23:04:08 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721077448 00:00:53.378 23:04:08 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721077448 00:00:53.378 23:04:08 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721077448 00:00:53.378 23:04:08 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721077448 00:00:53.378 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721077448_collect-vmstat.pm.log 00:00:53.378 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721077448_collect-cpu-load.pm.log 00:00:53.378 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721077448_collect-cpu-temp.pm.log 00:00:53.378 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721077448_collect-bmc-pm.bmc.pm.log 00:00:54.312 23:04:09 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:00:54.312 23:04:09 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:54.312 23:04:09 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:54.312 23:04:09 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:54.312 23:04:09 -- spdk/autobuild.sh@16 -- $ date -u 00:00:54.312 Mon Jul 15 09:04:09 PM UTC 2024 00:00:54.312 23:04:09 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:54.312 v24.09-pre-232-gc1860effd 00:00:54.312 23:04:09 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:54.312 23:04:09 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:54.313 23:04:09 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:54.313 23:04:09 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:00:54.313 23:04:09 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:00:54.313 23:04:09 -- common/autotest_common.sh@10 -- $ set +x 00:00:54.313 ************************************ 00:00:54.313 START TEST ubsan 00:00:54.313 ************************************ 00:00:54.313 23:04:09 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:00:54.313 using ubsan 00:00:54.313 00:00:54.313 real 0m0.000s 00:00:54.313 user 0m0.000s 00:00:54.313 sys 0m0.000s 00:00:54.313 23:04:09 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:00:54.313 23:04:09 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:54.313 ************************************ 00:00:54.313 END TEST ubsan 00:00:54.313 ************************************ 00:00:54.571 23:04:09 -- common/autotest_common.sh@1142 -- $ return 0 00:00:54.571 23:04:09 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:54.571 23:04:09 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:54.571 23:04:09 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:54.571 23:04:09 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:54.571 23:04:09 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:54.571 23:04:09 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:54.571 23:04:09 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:54.571 23:04:09 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:54.571 23:04:09 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:54.571 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:54.571 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:54.828 Using 'verbs' RDMA provider 00:01:05.372 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:15.392 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:15.392 Creating mk/config.mk...done. 00:01:15.392 Creating mk/cc.flags.mk...done. 00:01:15.392 Type 'make' to build. 00:01:15.392 23:04:29 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:01:15.392 23:04:29 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:15.392 23:04:29 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:15.392 23:04:29 -- common/autotest_common.sh@10 -- $ set +x 00:01:15.392 ************************************ 00:01:15.392 START TEST make 00:01:15.392 ************************************ 00:01:15.392 23:04:29 make -- common/autotest_common.sh@1123 -- $ make -j48 00:01:15.392 make[1]: Nothing to be done for 'all'. 
00:01:16.777 The Meson build system 00:01:16.777 Version: 1.3.1 00:01:16.777 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:16.778 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:16.778 Build type: native build 00:01:16.778 Project name: libvfio-user 00:01:16.778 Project version: 0.0.1 00:01:16.778 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:16.778 C linker for the host machine: cc ld.bfd 2.39-16 00:01:16.778 Host machine cpu family: x86_64 00:01:16.778 Host machine cpu: x86_64 00:01:16.778 Run-time dependency threads found: YES 00:01:16.778 Library dl found: YES 00:01:16.778 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:16.778 Run-time dependency json-c found: YES 0.17 00:01:16.778 Run-time dependency cmocka found: YES 1.1.7 00:01:16.778 Program pytest-3 found: NO 00:01:16.778 Program flake8 found: NO 00:01:16.778 Program misspell-fixer found: NO 00:01:16.778 Program restructuredtext-lint found: NO 00:01:16.778 Program valgrind found: YES (/usr/bin/valgrind) 00:01:16.778 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:16.778 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:16.778 Compiler for C supports arguments -Wwrite-strings: YES 00:01:16.778 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:16.778 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:16.778 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:16.778 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:16.778 Build targets in project: 8 00:01:16.778 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:16.778 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:16.778 00:01:16.778 libvfio-user 0.0.1 00:01:16.778 00:01:16.778 User defined options 00:01:16.778 buildtype : debug 00:01:16.778 default_library: shared 00:01:16.778 libdir : /usr/local/lib 00:01:16.778 00:01:16.778 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:17.354 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:17.354 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:17.354 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:17.619 [3/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:17.619 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:17.619 [5/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:17.619 [6/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:17.619 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:17.619 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:17.619 [9/37] Compiling C object samples/null.p/null.c.o 00:01:17.619 [10/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:17.619 [11/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:17.619 [12/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:17.619 [13/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:17.619 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:17.619 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:17.619 [16/37] Compiling C object samples/server.p/server.c.o 00:01:17.619 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:17.619 [18/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:17.619 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:17.619 [20/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:17.619 [21/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:17.619 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:17.619 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:17.619 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:17.619 [25/37] Compiling C object samples/client.p/client.c.o 00:01:17.880 [26/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:17.880 [27/37] Linking target samples/client 00:01:17.880 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:17.880 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:01:17.880 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:18.146 [31/37] Linking target test/unit_tests 00:01:18.146 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:18.146 [33/37] Linking target samples/server 00:01:18.146 [34/37] Linking target samples/gpio-pci-idio-16 00:01:18.146 [35/37] Linking target samples/lspci 00:01:18.146 [36/37] Linking target samples/shadow_ioeventfd_server 00:01:18.146 [37/37] Linking target samples/null 00:01:18.146 INFO: autodetecting backend as ninja 00:01:18.146 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:18.146 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:19.097 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:19.097 ninja: no work to do. 00:01:23.280 The Meson build system 00:01:23.280 Version: 1.3.1 00:01:23.280 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:23.280 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:23.280 Build type: native build 00:01:23.280 Program cat found: YES (/usr/bin/cat) 00:01:23.280 Project name: DPDK 00:01:23.280 Project version: 24.03.0 00:01:23.280 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:23.280 C linker for the host machine: cc ld.bfd 2.39-16 00:01:23.280 Host machine cpu family: x86_64 00:01:23.280 Host machine cpu: x86_64 00:01:23.280 Message: ## Building in Developer Mode ## 00:01:23.280 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:23.280 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:23.280 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:23.280 Program python3 found: YES (/usr/bin/python3) 00:01:23.280 Program cat found: YES (/usr/bin/cat) 00:01:23.280 Compiler for C supports arguments -march=native: YES 00:01:23.280 Checking for size of "void *" : 8 00:01:23.280 Checking for size of "void *" : 8 (cached) 00:01:23.280 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:23.280 Library m found: YES 00:01:23.280 Library numa found: YES 00:01:23.280 Has header "numaif.h" : YES 00:01:23.280 Library fdt found: NO 00:01:23.280 Library execinfo found: NO 00:01:23.280 Has header "execinfo.h" : YES 00:01:23.280 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:23.280 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:23.280 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:23.280 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:23.280 Run-time dependency openssl found: YES 3.0.9 00:01:23.280 Run-time dependency libpcap found: YES 1.10.4 00:01:23.280 Has header "pcap.h" with dependency libpcap: YES 00:01:23.280 Compiler for C supports arguments -Wcast-qual: YES 00:01:23.280 Compiler for C supports arguments -Wdeprecated: YES 00:01:23.280 Compiler for C supports arguments -Wformat: YES 00:01:23.280 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:23.280 Compiler for C supports arguments -Wformat-security: NO 00:01:23.280 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:23.280 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:23.280 Compiler for C supports arguments -Wnested-externs: YES 00:01:23.280 Compiler for C supports arguments -Wold-style-definition: YES 00:01:23.280 Compiler for C supports arguments -Wpointer-arith: YES 00:01:23.280 Compiler for C supports arguments -Wsign-compare: YES 00:01:23.280 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:23.280 Compiler for C supports arguments -Wundef: YES 00:01:23.280 Compiler for C supports arguments -Wwrite-strings: YES 00:01:23.280 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:23.280 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:01:23.280 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:23.280 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:23.280 Program objdump found: YES (/usr/bin/objdump) 00:01:23.280 Compiler for C supports arguments -mavx512f: YES 00:01:23.280 Checking if "AVX512 checking" compiles: YES 00:01:23.280 Fetching value of define "__SSE4_2__" : 1 00:01:23.280 Fetching value of define "__AES__" : 1 00:01:23.280 Fetching value of define "__AVX__" : 1 00:01:23.280 Fetching value of define "__AVX2__" : (undefined) 00:01:23.280 Fetching value of define "__AVX512BW__" : (undefined) 00:01:23.280 Fetching value of define "__AVX512CD__" : (undefined) 00:01:23.280 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:23.280 Fetching value of define "__AVX512F__" : (undefined) 00:01:23.280 Fetching value of define "__AVX512VL__" : (undefined) 00:01:23.280 Fetching value of define "__PCLMUL__" : 1 00:01:23.280 Fetching value of define "__RDRND__" : 1 00:01:23.280 Fetching value of define "__RDSEED__" : (undefined) 00:01:23.280 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:23.280 Fetching value of define "__znver1__" : (undefined) 00:01:23.280 Fetching value of define "__znver2__" : (undefined) 00:01:23.280 Fetching value of define "__znver3__" : (undefined) 00:01:23.280 Fetching value of define "__znver4__" : (undefined) 00:01:23.280 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:23.280 Message: lib/log: Defining dependency "log" 00:01:23.280 Message: lib/kvargs: Defining dependency "kvargs" 00:01:23.280 Message: lib/telemetry: Defining dependency "telemetry" 00:01:23.280 Checking for function "getentropy" : NO 00:01:23.280 Message: lib/eal: Defining dependency "eal" 00:01:23.280 Message: lib/ring: Defining dependency "ring" 00:01:23.280 Message: lib/rcu: Defining dependency "rcu" 00:01:23.280 Message: lib/mempool: Defining dependency "mempool" 00:01:23.280 Message: lib/mbuf: Defining dependency "mbuf" 00:01:23.280 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:23.280 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:23.280 Compiler for C supports arguments -mpclmul: YES 00:01:23.280 Compiler for C supports arguments -maes: YES 00:01:23.280 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:23.280 Compiler for C supports arguments -mavx512bw: YES 00:01:23.280 Compiler for C supports arguments -mavx512dq: YES 00:01:23.280 Compiler for C supports arguments -mavx512vl: YES 00:01:23.280 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:23.280 Compiler for C supports arguments -mavx2: YES 00:01:23.280 Compiler for C supports arguments -mavx: YES 00:01:23.280 Message: lib/net: Defining dependency "net" 00:01:23.280 Message: lib/meter: Defining dependency "meter" 00:01:23.280 Message: lib/ethdev: Defining dependency "ethdev" 00:01:23.280 Message: lib/pci: Defining dependency "pci" 00:01:23.280 Message: lib/cmdline: Defining dependency "cmdline" 00:01:23.280 Message: lib/hash: Defining dependency "hash" 00:01:23.280 Message: lib/timer: Defining dependency "timer" 00:01:23.280 Message: lib/compressdev: Defining dependency "compressdev" 00:01:23.280 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:23.280 Message: lib/dmadev: Defining dependency "dmadev" 00:01:23.280 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:23.280 Message: lib/power: Defining dependency "power" 00:01:23.280 Message: lib/reorder: Defining dependency "reorder" 00:01:23.280 
Message: lib/security: Defining dependency "security" 00:01:23.280 Has header "linux/userfaultfd.h" : YES 00:01:23.280 Has header "linux/vduse.h" : YES 00:01:23.280 Message: lib/vhost: Defining dependency "vhost" 00:01:23.280 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:23.280 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:23.280 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:23.280 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:23.280 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:23.280 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:23.280 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:23.280 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:23.280 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:23.280 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:23.280 Program doxygen found: YES (/usr/bin/doxygen) 00:01:23.280 Configuring doxy-api-html.conf using configuration 00:01:23.280 Configuring doxy-api-man.conf using configuration 00:01:23.280 Program mandb found: YES (/usr/bin/mandb) 00:01:23.280 Program sphinx-build found: NO 00:01:23.280 Configuring rte_build_config.h using configuration 00:01:23.280 Message: 00:01:23.280 ================= 00:01:23.280 Applications Enabled 00:01:23.280 ================= 00:01:23.280 00:01:23.280 apps: 00:01:23.280 00:01:23.280 00:01:23.280 Message: 00:01:23.280 ================= 00:01:23.280 Libraries Enabled 00:01:23.280 ================= 00:01:23.280 00:01:23.280 libs: 00:01:23.280 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:23.280 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:23.280 cryptodev, dmadev, power, reorder, security, vhost, 00:01:23.280 00:01:23.280 Message: 00:01:23.280 =============== 00:01:23.280 Drivers Enabled 00:01:23.280 =============== 00:01:23.280 00:01:23.280 common: 00:01:23.280 00:01:23.280 bus: 00:01:23.280 pci, vdev, 00:01:23.280 mempool: 00:01:23.280 ring, 00:01:23.280 dma: 00:01:23.280 00:01:23.280 net: 00:01:23.280 00:01:23.280 crypto: 00:01:23.280 00:01:23.280 compress: 00:01:23.280 00:01:23.280 vdpa: 00:01:23.280 00:01:23.280 00:01:23.280 Message: 00:01:23.280 ================= 00:01:23.280 Content Skipped 00:01:23.280 ================= 00:01:23.280 00:01:23.280 apps: 00:01:23.280 dumpcap: explicitly disabled via build config 00:01:23.280 graph: explicitly disabled via build config 00:01:23.280 pdump: explicitly disabled via build config 00:01:23.280 proc-info: explicitly disabled via build config 00:01:23.280 test-acl: explicitly disabled via build config 00:01:23.280 test-bbdev: explicitly disabled via build config 00:01:23.280 test-cmdline: explicitly disabled via build config 00:01:23.280 test-compress-perf: explicitly disabled via build config 00:01:23.280 test-crypto-perf: explicitly disabled via build config 00:01:23.280 test-dma-perf: explicitly disabled via build config 00:01:23.280 test-eventdev: explicitly disabled via build config 00:01:23.280 test-fib: explicitly disabled via build config 00:01:23.280 test-flow-perf: explicitly disabled via build config 00:01:23.280 test-gpudev: explicitly disabled via build config 00:01:23.280 test-mldev: explicitly disabled via build config 00:01:23.280 test-pipeline: explicitly disabled via build config 00:01:23.280 test-pmd: explicitly disabled via build config 
00:01:23.280 test-regex: explicitly disabled via build config 00:01:23.280 test-sad: explicitly disabled via build config 00:01:23.280 test-security-perf: explicitly disabled via build config 00:01:23.280 00:01:23.280 libs: 00:01:23.280 argparse: explicitly disabled via build config 00:01:23.280 metrics: explicitly disabled via build config 00:01:23.280 acl: explicitly disabled via build config 00:01:23.280 bbdev: explicitly disabled via build config 00:01:23.281 bitratestats: explicitly disabled via build config 00:01:23.281 bpf: explicitly disabled via build config 00:01:23.281 cfgfile: explicitly disabled via build config 00:01:23.281 distributor: explicitly disabled via build config 00:01:23.281 efd: explicitly disabled via build config 00:01:23.281 eventdev: explicitly disabled via build config 00:01:23.281 dispatcher: explicitly disabled via build config 00:01:23.281 gpudev: explicitly disabled via build config 00:01:23.281 gro: explicitly disabled via build config 00:01:23.281 gso: explicitly disabled via build config 00:01:23.281 ip_frag: explicitly disabled via build config 00:01:23.281 jobstats: explicitly disabled via build config 00:01:23.281 latencystats: explicitly disabled via build config 00:01:23.281 lpm: explicitly disabled via build config 00:01:23.281 member: explicitly disabled via build config 00:01:23.281 pcapng: explicitly disabled via build config 00:01:23.281 rawdev: explicitly disabled via build config 00:01:23.281 regexdev: explicitly disabled via build config 00:01:23.281 mldev: explicitly disabled via build config 00:01:23.281 rib: explicitly disabled via build config 00:01:23.281 sched: explicitly disabled via build config 00:01:23.281 stack: explicitly disabled via build config 00:01:23.281 ipsec: explicitly disabled via build config 00:01:23.281 pdcp: explicitly disabled via build config 00:01:23.281 fib: explicitly disabled via build config 00:01:23.281 port: explicitly disabled via build config 00:01:23.281 pdump: explicitly disabled via build config 00:01:23.281 table: explicitly disabled via build config 00:01:23.281 pipeline: explicitly disabled via build config 00:01:23.281 graph: explicitly disabled via build config 00:01:23.281 node: explicitly disabled via build config 00:01:23.281 00:01:23.281 drivers: 00:01:23.281 common/cpt: not in enabled drivers build config 00:01:23.281 common/dpaax: not in enabled drivers build config 00:01:23.281 common/iavf: not in enabled drivers build config 00:01:23.281 common/idpf: not in enabled drivers build config 00:01:23.281 common/ionic: not in enabled drivers build config 00:01:23.281 common/mvep: not in enabled drivers build config 00:01:23.281 common/octeontx: not in enabled drivers build config 00:01:23.281 bus/auxiliary: not in enabled drivers build config 00:01:23.281 bus/cdx: not in enabled drivers build config 00:01:23.281 bus/dpaa: not in enabled drivers build config 00:01:23.281 bus/fslmc: not in enabled drivers build config 00:01:23.281 bus/ifpga: not in enabled drivers build config 00:01:23.281 bus/platform: not in enabled drivers build config 00:01:23.281 bus/uacce: not in enabled drivers build config 00:01:23.281 bus/vmbus: not in enabled drivers build config 00:01:23.281 common/cnxk: not in enabled drivers build config 00:01:23.281 common/mlx5: not in enabled drivers build config 00:01:23.281 common/nfp: not in enabled drivers build config 00:01:23.281 common/nitrox: not in enabled drivers build config 00:01:23.281 common/qat: not in enabled drivers build config 00:01:23.281 common/sfc_efx: not in 
enabled drivers build config 00:01:23.281 mempool/bucket: not in enabled drivers build config 00:01:23.281 mempool/cnxk: not in enabled drivers build config 00:01:23.281 mempool/dpaa: not in enabled drivers build config 00:01:23.281 mempool/dpaa2: not in enabled drivers build config 00:01:23.281 mempool/octeontx: not in enabled drivers build config 00:01:23.281 mempool/stack: not in enabled drivers build config 00:01:23.281 dma/cnxk: not in enabled drivers build config 00:01:23.281 dma/dpaa: not in enabled drivers build config 00:01:23.281 dma/dpaa2: not in enabled drivers build config 00:01:23.281 dma/hisilicon: not in enabled drivers build config 00:01:23.281 dma/idxd: not in enabled drivers build config 00:01:23.281 dma/ioat: not in enabled drivers build config 00:01:23.281 dma/skeleton: not in enabled drivers build config 00:01:23.281 net/af_packet: not in enabled drivers build config 00:01:23.281 net/af_xdp: not in enabled drivers build config 00:01:23.281 net/ark: not in enabled drivers build config 00:01:23.281 net/atlantic: not in enabled drivers build config 00:01:23.281 net/avp: not in enabled drivers build config 00:01:23.281 net/axgbe: not in enabled drivers build config 00:01:23.281 net/bnx2x: not in enabled drivers build config 00:01:23.281 net/bnxt: not in enabled drivers build config 00:01:23.281 net/bonding: not in enabled drivers build config 00:01:23.281 net/cnxk: not in enabled drivers build config 00:01:23.281 net/cpfl: not in enabled drivers build config 00:01:23.281 net/cxgbe: not in enabled drivers build config 00:01:23.281 net/dpaa: not in enabled drivers build config 00:01:23.281 net/dpaa2: not in enabled drivers build config 00:01:23.281 net/e1000: not in enabled drivers build config 00:01:23.281 net/ena: not in enabled drivers build config 00:01:23.281 net/enetc: not in enabled drivers build config 00:01:23.281 net/enetfec: not in enabled drivers build config 00:01:23.281 net/enic: not in enabled drivers build config 00:01:23.281 net/failsafe: not in enabled drivers build config 00:01:23.281 net/fm10k: not in enabled drivers build config 00:01:23.281 net/gve: not in enabled drivers build config 00:01:23.281 net/hinic: not in enabled drivers build config 00:01:23.281 net/hns3: not in enabled drivers build config 00:01:23.281 net/i40e: not in enabled drivers build config 00:01:23.281 net/iavf: not in enabled drivers build config 00:01:23.281 net/ice: not in enabled drivers build config 00:01:23.281 net/idpf: not in enabled drivers build config 00:01:23.281 net/igc: not in enabled drivers build config 00:01:23.281 net/ionic: not in enabled drivers build config 00:01:23.281 net/ipn3ke: not in enabled drivers build config 00:01:23.281 net/ixgbe: not in enabled drivers build config 00:01:23.281 net/mana: not in enabled drivers build config 00:01:23.281 net/memif: not in enabled drivers build config 00:01:23.281 net/mlx4: not in enabled drivers build config 00:01:23.281 net/mlx5: not in enabled drivers build config 00:01:23.281 net/mvneta: not in enabled drivers build config 00:01:23.281 net/mvpp2: not in enabled drivers build config 00:01:23.281 net/netvsc: not in enabled drivers build config 00:01:23.281 net/nfb: not in enabled drivers build config 00:01:23.281 net/nfp: not in enabled drivers build config 00:01:23.281 net/ngbe: not in enabled drivers build config 00:01:23.281 net/null: not in enabled drivers build config 00:01:23.281 net/octeontx: not in enabled drivers build config 00:01:23.281 net/octeon_ep: not in enabled drivers build config 00:01:23.281 
net/pcap: not in enabled drivers build config 00:01:23.281 net/pfe: not in enabled drivers build config 00:01:23.281 net/qede: not in enabled drivers build config 00:01:23.281 net/ring: not in enabled drivers build config 00:01:23.281 net/sfc: not in enabled drivers build config 00:01:23.281 net/softnic: not in enabled drivers build config 00:01:23.281 net/tap: not in enabled drivers build config 00:01:23.281 net/thunderx: not in enabled drivers build config 00:01:23.281 net/txgbe: not in enabled drivers build config 00:01:23.281 net/vdev_netvsc: not in enabled drivers build config 00:01:23.281 net/vhost: not in enabled drivers build config 00:01:23.281 net/virtio: not in enabled drivers build config 00:01:23.281 net/vmxnet3: not in enabled drivers build config 00:01:23.281 raw/*: missing internal dependency, "rawdev" 00:01:23.281 crypto/armv8: not in enabled drivers build config 00:01:23.281 crypto/bcmfs: not in enabled drivers build config 00:01:23.281 crypto/caam_jr: not in enabled drivers build config 00:01:23.281 crypto/ccp: not in enabled drivers build config 00:01:23.281 crypto/cnxk: not in enabled drivers build config 00:01:23.281 crypto/dpaa_sec: not in enabled drivers build config 00:01:23.281 crypto/dpaa2_sec: not in enabled drivers build config 00:01:23.281 crypto/ipsec_mb: not in enabled drivers build config 00:01:23.281 crypto/mlx5: not in enabled drivers build config 00:01:23.281 crypto/mvsam: not in enabled drivers build config 00:01:23.281 crypto/nitrox: not in enabled drivers build config 00:01:23.281 crypto/null: not in enabled drivers build config 00:01:23.281 crypto/octeontx: not in enabled drivers build config 00:01:23.281 crypto/openssl: not in enabled drivers build config 00:01:23.281 crypto/scheduler: not in enabled drivers build config 00:01:23.281 crypto/uadk: not in enabled drivers build config 00:01:23.281 crypto/virtio: not in enabled drivers build config 00:01:23.281 compress/isal: not in enabled drivers build config 00:01:23.281 compress/mlx5: not in enabled drivers build config 00:01:23.281 compress/nitrox: not in enabled drivers build config 00:01:23.281 compress/octeontx: not in enabled drivers build config 00:01:23.281 compress/zlib: not in enabled drivers build config 00:01:23.281 regex/*: missing internal dependency, "regexdev" 00:01:23.281 ml/*: missing internal dependency, "mldev" 00:01:23.281 vdpa/ifc: not in enabled drivers build config 00:01:23.281 vdpa/mlx5: not in enabled drivers build config 00:01:23.281 vdpa/nfp: not in enabled drivers build config 00:01:23.281 vdpa/sfc: not in enabled drivers build config 00:01:23.281 event/*: missing internal dependency, "eventdev" 00:01:23.281 baseband/*: missing internal dependency, "bbdev" 00:01:23.281 gpu/*: missing internal dependency, "gpudev" 00:01:23.281 00:01:23.281 00:01:23.846 Build targets in project: 85 00:01:23.846 00:01:23.846 DPDK 24.03.0 00:01:23.846 00:01:23.846 User defined options 00:01:23.846 buildtype : debug 00:01:23.846 default_library : shared 00:01:23.846 libdir : lib 00:01:23.847 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:23.847 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:23.847 c_link_args : 00:01:23.847 cpu_instruction_set: native 00:01:23.847 disable_apps : 
test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:23.847 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:23.847 enable_docs : false 00:01:23.847 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:23.847 enable_kmods : false 00:01:23.847 max_lcores : 128 00:01:23.847 tests : false 00:01:23.847 00:01:23.847 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:24.109 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:24.369 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:24.369 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:24.369 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:24.369 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:24.369 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:24.369 [6/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:24.369 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:24.369 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:24.369 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:24.369 [10/268] Linking static target lib/librte_kvargs.a 00:01:24.369 [11/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:24.369 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:24.369 [13/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:24.369 [14/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:24.369 [15/268] Linking static target lib/librte_log.a 00:01:24.369 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:24.939 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.207 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:25.207 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:25.207 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:25.207 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:25.207 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:25.207 [23/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:25.207 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:25.207 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:25.207 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:25.207 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:25.207 [28/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:25.207 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:25.207 [30/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:25.207 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:25.207 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:25.207 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:25.207 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:25.207 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:25.207 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:25.207 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:25.207 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:25.207 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:25.207 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:25.207 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:25.207 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:25.207 [43/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:25.207 [44/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:25.207 [45/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:25.207 [46/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:25.207 [47/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:25.207 [48/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:25.207 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:25.207 [50/268] Linking static target lib/librte_telemetry.a 00:01:25.207 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:25.468 [52/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:25.468 [53/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:25.468 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:25.468 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:25.468 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:25.468 [57/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:25.468 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:25.468 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:25.468 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:25.468 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:25.468 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:25.468 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:25.727 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:25.727 [65/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.727 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:25.727 [67/268] Linking target lib/librte_log.so.24.1 00:01:25.727 [68/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:25.727 [69/268] Linking static target lib/librte_pci.a 00:01:25.993 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 
00:01:25.993 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:25.993 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:25.994 [73/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:25.994 [74/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:25.994 [75/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:25.994 [76/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:25.994 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:25.994 [78/268] Linking target lib/librte_kvargs.so.24.1 00:01:25.994 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:25.994 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:25.994 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:26.252 [82/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:26.252 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:26.252 [84/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:26.252 [85/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:26.252 [86/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:26.252 [87/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:26.252 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:26.252 [89/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:26.252 [90/268] Linking static target lib/librte_ring.a 00:01:26.252 [91/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:26.252 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:26.252 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:26.252 [94/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:26.252 [95/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:26.252 [96/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:26.252 [97/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:26.252 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:26.252 [99/268] Linking static target lib/librte_meter.a 00:01:26.252 [100/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:26.252 [101/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:26.252 [102/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:26.252 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:26.252 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:26.252 [105/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:26.252 [106/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.252 [107/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:26.512 [108/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:26.512 [109/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.512 [110/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:26.512 [111/268] Linking static target 
lib/librte_rcu.a 00:01:26.512 [112/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:26.512 [113/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:26.512 [114/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:26.512 [115/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:26.512 [116/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:26.512 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:26.512 [118/268] Linking static target lib/librte_mempool.a 00:01:26.512 [119/268] Linking static target lib/librte_eal.a 00:01:26.512 [120/268] Linking target lib/librte_telemetry.so.24.1 00:01:26.512 [121/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:26.512 [122/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:26.512 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:26.512 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:26.512 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:26.772 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:26.772 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:26.772 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:26.772 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:26.772 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:26.772 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:26.772 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:26.772 [133/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:26.772 [134/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:26.772 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:26.772 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:27.031 [137/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.031 [138/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.031 [139/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:27.031 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:27.031 [141/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:27.031 [142/268] Linking static target lib/librte_net.a 00:01:27.031 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:27.031 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:27.031 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:27.031 [146/268] Linking static target lib/librte_cmdline.a 00:01:27.032 [147/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.290 [148/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:27.290 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:27.290 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:27.290 [151/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:27.290 [152/268] Linking 
static target lib/librte_timer.a 00:01:27.290 [153/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:27.290 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:27.290 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:27.290 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:27.290 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:27.570 [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:27.570 [159/268] Linking static target lib/librte_dmadev.a 00:01:27.570 [160/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:27.570 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:27.570 [162/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:27.570 [163/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.570 [164/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:27.570 [165/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:27.570 [166/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:27.570 [167/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:27.570 [168/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.570 [169/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:27.570 [170/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:27.570 [171/268] Linking static target lib/librte_compressdev.a 00:01:27.570 [172/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:27.827 [173/268] Linking static target lib/librte_power.a 00:01:27.827 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:27.827 [175/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.827 [176/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:27.827 [177/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:27.827 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:27.827 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:27.827 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:27.827 [181/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:27.827 [182/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:27.827 [183/268] Linking static target lib/librte_hash.a 00:01:27.827 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:27.827 [185/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:27.827 [186/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.083 [187/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:28.083 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:28.083 [189/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:28.083 [190/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.083 [191/268] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:28.083 [192/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:28.083 [193/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:28.083 [194/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:28.083 [195/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:28.083 [196/268] Linking static target lib/librte_mbuf.a 00:01:28.083 [197/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.083 [198/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:28.083 [199/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:28.084 [200/268] Linking static target lib/librte_reorder.a 00:01:28.084 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:28.084 [202/268] Linking static target lib/librte_security.a 00:01:28.340 [203/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:28.340 [204/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.340 [205/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:28.340 [206/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:28.340 [207/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:28.340 [208/268] Linking static target drivers/librte_bus_pci.a 00:01:28.340 [209/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:28.340 [210/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:28.340 [211/268] Linking static target drivers/librte_bus_vdev.a 00:01:28.340 [212/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:28.340 [213/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:28.340 [214/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:28.340 [215/268] Linking static target drivers/librte_mempool_ring.a 00:01:28.340 [216/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.340 [217/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:28.340 [218/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.340 [219/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:28.596 [220/268] Linking static target lib/librte_ethdev.a 00:01:28.596 [221/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.597 [222/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.597 [223/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.597 [224/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:28.597 [225/268] Linking static target lib/librte_cryptodev.a 00:01:28.597 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.966 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.898 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:32.798 [229/268] 
Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.798 [230/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.798 [231/268] Linking target lib/librte_eal.so.24.1 00:01:33.056 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:33.056 [233/268] Linking target lib/librte_ring.so.24.1 00:01:33.056 [234/268] Linking target lib/librte_timer.so.24.1 00:01:33.056 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:33.056 [236/268] Linking target lib/librte_pci.so.24.1 00:01:33.056 [237/268] Linking target lib/librte_meter.so.24.1 00:01:33.056 [238/268] Linking target lib/librte_dmadev.so.24.1 00:01:33.056 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:33.056 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:33.056 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:33.056 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:33.056 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:33.315 [244/268] Linking target lib/librte_rcu.so.24.1 00:01:33.315 [245/268] Linking target lib/librte_mempool.so.24.1 00:01:33.315 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:33.315 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:33.315 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:33.315 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:33.315 [250/268] Linking target lib/librte_mbuf.so.24.1 00:01:33.573 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:33.573 [252/268] Linking target lib/librte_compressdev.so.24.1 00:01:33.573 [253/268] Linking target lib/librte_net.so.24.1 00:01:33.573 [254/268] Linking target lib/librte_reorder.so.24.1 00:01:33.573 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:01:33.573 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:33.573 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:33.573 [258/268] Linking target lib/librte_hash.so.24.1 00:01:33.573 [259/268] Linking target lib/librte_cmdline.so.24.1 00:01:33.831 [260/268] Linking target lib/librte_security.so.24.1 00:01:33.831 [261/268] Linking target lib/librte_ethdev.so.24.1 00:01:33.831 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:33.831 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:33.831 [264/268] Linking target lib/librte_power.so.24.1 00:01:36.361 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:36.361 [266/268] Linking static target lib/librte_vhost.a 00:01:37.295 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.295 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:37.295 INFO: autodetecting backend as ninja 00:01:37.295 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:01:38.227 CC lib/log/log.o 00:01:38.227 CC lib/log/log_flags.o 00:01:38.227 CC lib/log/log_deprecated.o 00:01:38.227 CC lib/ut_mock/mock.o 00:01:38.227 CC 
lib/ut/ut.o 00:01:38.227 LIB libspdk_ut_mock.a 00:01:38.227 LIB libspdk_log.a 00:01:38.227 LIB libspdk_ut.a 00:01:38.485 SO libspdk_ut_mock.so.6.0 00:01:38.485 SO libspdk_log.so.7.0 00:01:38.485 SO libspdk_ut.so.2.0 00:01:38.485 SYMLINK libspdk_ut_mock.so 00:01:38.485 SYMLINK libspdk_ut.so 00:01:38.485 SYMLINK libspdk_log.so 00:01:38.485 CXX lib/trace_parser/trace.o 00:01:38.485 CC lib/ioat/ioat.o 00:01:38.485 CC lib/dma/dma.o 00:01:38.485 CC lib/util/base64.o 00:01:38.485 CC lib/util/bit_array.o 00:01:38.485 CC lib/util/cpuset.o 00:01:38.485 CC lib/util/crc16.o 00:01:38.485 CC lib/util/crc32.o 00:01:38.485 CC lib/util/crc32c.o 00:01:38.485 CC lib/util/crc32_ieee.o 00:01:38.485 CC lib/util/crc64.o 00:01:38.485 CC lib/util/dif.o 00:01:38.485 CC lib/util/fd.o 00:01:38.485 CC lib/util/fd_group.o 00:01:38.485 CC lib/util/file.o 00:01:38.485 CC lib/util/hexlify.o 00:01:38.485 CC lib/util/iov.o 00:01:38.485 CC lib/util/math.o 00:01:38.485 CC lib/util/net.o 00:01:38.485 CC lib/util/pipe.o 00:01:38.485 CC lib/util/strerror_tls.o 00:01:38.485 CC lib/util/string.o 00:01:38.485 CC lib/util/uuid.o 00:01:38.485 CC lib/util/xor.o 00:01:38.485 CC lib/util/zipf.o 00:01:38.743 CC lib/vfio_user/host/vfio_user_pci.o 00:01:38.743 CC lib/vfio_user/host/vfio_user.o 00:01:38.743 LIB libspdk_dma.a 00:01:38.743 SO libspdk_dma.so.4.0 00:01:39.001 SYMLINK libspdk_dma.so 00:01:39.001 LIB libspdk_ioat.a 00:01:39.001 SO libspdk_ioat.so.7.0 00:01:39.001 SYMLINK libspdk_ioat.so 00:01:39.001 LIB libspdk_vfio_user.a 00:01:39.001 SO libspdk_vfio_user.so.5.0 00:01:39.001 SYMLINK libspdk_vfio_user.so 00:01:39.262 LIB libspdk_util.a 00:01:39.262 SO libspdk_util.so.9.1 00:01:39.262 SYMLINK libspdk_util.so 00:01:39.521 CC lib/rdma_provider/common.o 00:01:39.521 CC lib/json/json_parse.o 00:01:39.521 CC lib/idxd/idxd.o 00:01:39.521 CC lib/vmd/vmd.o 00:01:39.521 CC lib/conf/conf.o 00:01:39.521 CC lib/rdma_utils/rdma_utils.o 00:01:39.521 CC lib/env_dpdk/env.o 00:01:39.521 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:39.521 CC lib/json/json_util.o 00:01:39.521 CC lib/vmd/led.o 00:01:39.521 CC lib/idxd/idxd_user.o 00:01:39.521 CC lib/env_dpdk/memory.o 00:01:39.521 CC lib/json/json_write.o 00:01:39.521 CC lib/idxd/idxd_kernel.o 00:01:39.521 CC lib/env_dpdk/pci.o 00:01:39.521 CC lib/env_dpdk/init.o 00:01:39.521 CC lib/env_dpdk/threads.o 00:01:39.521 CC lib/env_dpdk/pci_ioat.o 00:01:39.521 CC lib/env_dpdk/pci_virtio.o 00:01:39.521 CC lib/env_dpdk/pci_vmd.o 00:01:39.521 CC lib/env_dpdk/pci_idxd.o 00:01:39.521 CC lib/env_dpdk/pci_event.o 00:01:39.521 CC lib/env_dpdk/sigbus_handler.o 00:01:39.521 CC lib/env_dpdk/pci_dpdk.o 00:01:39.521 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:39.521 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:39.521 LIB libspdk_trace_parser.a 00:01:39.521 SO libspdk_trace_parser.so.5.0 00:01:39.779 LIB libspdk_rdma_provider.a 00:01:39.779 SYMLINK libspdk_trace_parser.so 00:01:39.779 SO libspdk_rdma_provider.so.6.0 00:01:39.779 LIB libspdk_conf.a 00:01:39.779 SO libspdk_conf.so.6.0 00:01:39.779 SYMLINK libspdk_rdma_provider.so 00:01:39.779 LIB libspdk_json.a 00:01:39.779 SYMLINK libspdk_conf.so 00:01:39.779 SO libspdk_json.so.6.0 00:01:40.037 LIB libspdk_rdma_utils.a 00:01:40.037 SYMLINK libspdk_json.so 00:01:40.037 SO libspdk_rdma_utils.so.1.0 00:01:40.037 SYMLINK libspdk_rdma_utils.so 00:01:40.037 CC lib/jsonrpc/jsonrpc_server.o 00:01:40.037 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:40.037 CC lib/jsonrpc/jsonrpc_client.o 00:01:40.037 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:40.037 LIB libspdk_idxd.a 00:01:40.037 SO 
libspdk_idxd.so.12.0 00:01:40.296 SYMLINK libspdk_idxd.so 00:01:40.296 LIB libspdk_vmd.a 00:01:40.296 SO libspdk_vmd.so.6.0 00:01:40.296 SYMLINK libspdk_vmd.so 00:01:40.296 LIB libspdk_jsonrpc.a 00:01:40.296 SO libspdk_jsonrpc.so.6.0 00:01:40.554 SYMLINK libspdk_jsonrpc.so 00:01:40.554 CC lib/rpc/rpc.o 00:01:40.813 LIB libspdk_rpc.a 00:01:40.813 SO libspdk_rpc.so.6.0 00:01:40.813 SYMLINK libspdk_rpc.so 00:01:41.072 CC lib/trace/trace.o 00:01:41.072 CC lib/notify/notify.o 00:01:41.072 CC lib/keyring/keyring.o 00:01:41.072 CC lib/trace/trace_flags.o 00:01:41.072 CC lib/keyring/keyring_rpc.o 00:01:41.072 CC lib/notify/notify_rpc.o 00:01:41.072 CC lib/trace/trace_rpc.o 00:01:41.413 LIB libspdk_notify.a 00:01:41.413 SO libspdk_notify.so.6.0 00:01:41.413 LIB libspdk_keyring.a 00:01:41.413 SYMLINK libspdk_notify.so 00:01:41.413 LIB libspdk_trace.a 00:01:41.413 SO libspdk_keyring.so.1.0 00:01:41.413 SO libspdk_trace.so.10.0 00:01:41.413 SYMLINK libspdk_keyring.so 00:01:41.413 SYMLINK libspdk_trace.so 00:01:41.694 LIB libspdk_env_dpdk.a 00:01:41.694 SO libspdk_env_dpdk.so.15.0 00:01:41.694 CC lib/sock/sock.o 00:01:41.694 CC lib/sock/sock_rpc.o 00:01:41.694 CC lib/thread/thread.o 00:01:41.694 CC lib/thread/iobuf.o 00:01:41.694 SYMLINK libspdk_env_dpdk.so 00:01:41.952 LIB libspdk_sock.a 00:01:41.952 SO libspdk_sock.so.10.0 00:01:41.952 SYMLINK libspdk_sock.so 00:01:42.210 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:42.210 CC lib/nvme/nvme_ctrlr.o 00:01:42.210 CC lib/nvme/nvme_fabric.o 00:01:42.210 CC lib/nvme/nvme_ns_cmd.o 00:01:42.210 CC lib/nvme/nvme_ns.o 00:01:42.210 CC lib/nvme/nvme_pcie_common.o 00:01:42.210 CC lib/nvme/nvme_pcie.o 00:01:42.210 CC lib/nvme/nvme_qpair.o 00:01:42.210 CC lib/nvme/nvme.o 00:01:42.210 CC lib/nvme/nvme_quirks.o 00:01:42.210 CC lib/nvme/nvme_transport.o 00:01:42.210 CC lib/nvme/nvme_discovery.o 00:01:42.210 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:42.210 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:42.210 CC lib/nvme/nvme_tcp.o 00:01:42.210 CC lib/nvme/nvme_opal.o 00:01:42.210 CC lib/nvme/nvme_io_msg.o 00:01:42.210 CC lib/nvme/nvme_poll_group.o 00:01:42.210 CC lib/nvme/nvme_zns.o 00:01:42.210 CC lib/nvme/nvme_stubs.o 00:01:42.210 CC lib/nvme/nvme_auth.o 00:01:42.210 CC lib/nvme/nvme_cuse.o 00:01:42.210 CC lib/nvme/nvme_vfio_user.o 00:01:42.210 CC lib/nvme/nvme_rdma.o 00:01:43.145 LIB libspdk_thread.a 00:01:43.145 SO libspdk_thread.so.10.1 00:01:43.145 SYMLINK libspdk_thread.so 00:01:43.403 CC lib/blob/blobstore.o 00:01:43.403 CC lib/accel/accel.o 00:01:43.403 CC lib/vfu_tgt/tgt_endpoint.o 00:01:43.403 CC lib/blob/request.o 00:01:43.403 CC lib/vfu_tgt/tgt_rpc.o 00:01:43.403 CC lib/init/json_config.o 00:01:43.403 CC lib/accel/accel_rpc.o 00:01:43.403 CC lib/virtio/virtio.o 00:01:43.403 CC lib/blob/zeroes.o 00:01:43.403 CC lib/accel/accel_sw.o 00:01:43.403 CC lib/virtio/virtio_vhost_user.o 00:01:43.403 CC lib/init/subsystem.o 00:01:43.403 CC lib/blob/blob_bs_dev.o 00:01:43.403 CC lib/virtio/virtio_vfio_user.o 00:01:43.403 CC lib/init/subsystem_rpc.o 00:01:43.403 CC lib/init/rpc.o 00:01:43.403 CC lib/virtio/virtio_pci.o 00:01:43.663 LIB libspdk_init.a 00:01:43.663 SO libspdk_init.so.5.0 00:01:43.663 LIB libspdk_virtio.a 00:01:43.663 LIB libspdk_vfu_tgt.a 00:01:43.920 SYMLINK libspdk_init.so 00:01:43.920 SO libspdk_vfu_tgt.so.3.0 00:01:43.920 SO libspdk_virtio.so.7.0 00:01:43.920 SYMLINK libspdk_vfu_tgt.so 00:01:43.920 SYMLINK libspdk_virtio.so 00:01:43.920 CC lib/event/app.o 00:01:43.920 CC lib/event/reactor.o 00:01:43.920 CC lib/event/log_rpc.o 00:01:43.920 CC lib/event/app_rpc.o 
00:01:43.920 CC lib/event/scheduler_static.o 00:01:44.486 LIB libspdk_event.a 00:01:44.486 SO libspdk_event.so.14.0 00:01:44.486 LIB libspdk_accel.a 00:01:44.486 SYMLINK libspdk_event.so 00:01:44.486 SO libspdk_accel.so.15.1 00:01:44.486 SYMLINK libspdk_accel.so 00:01:44.744 LIB libspdk_nvme.a 00:01:44.744 CC lib/bdev/bdev.o 00:01:44.744 CC lib/bdev/bdev_rpc.o 00:01:44.744 CC lib/bdev/bdev_zone.o 00:01:44.744 CC lib/bdev/part.o 00:01:44.744 CC lib/bdev/scsi_nvme.o 00:01:44.744 SO libspdk_nvme.so.13.1 00:01:45.001 SYMLINK libspdk_nvme.so 00:01:46.376 LIB libspdk_blob.a 00:01:46.376 SO libspdk_blob.so.11.0 00:01:46.633 SYMLINK libspdk_blob.so 00:01:46.633 CC lib/blobfs/blobfs.o 00:01:46.633 CC lib/blobfs/tree.o 00:01:46.633 CC lib/lvol/lvol.o 00:01:47.198 LIB libspdk_bdev.a 00:01:47.198 SO libspdk_bdev.so.15.1 00:01:47.461 SYMLINK libspdk_bdev.so 00:01:47.461 LIB libspdk_blobfs.a 00:01:47.461 CC lib/nbd/nbd.o 00:01:47.461 CC lib/nbd/nbd_rpc.o 00:01:47.461 CC lib/scsi/dev.o 00:01:47.461 CC lib/ftl/ftl_core.o 00:01:47.461 CC lib/scsi/lun.o 00:01:47.462 CC lib/ftl/ftl_init.o 00:01:47.462 CC lib/ftl/ftl_layout.o 00:01:47.462 CC lib/scsi/port.o 00:01:47.462 CC lib/ublk/ublk.o 00:01:47.462 CC lib/nvmf/ctrlr.o 00:01:47.462 CC lib/ftl/ftl_debug.o 00:01:47.462 CC lib/scsi/scsi.o 00:01:47.462 CC lib/ublk/ublk_rpc.o 00:01:47.462 CC lib/nvmf/ctrlr_discovery.o 00:01:47.462 CC lib/scsi/scsi_bdev.o 00:01:47.462 CC lib/ftl/ftl_io.o 00:01:47.462 CC lib/nvmf/ctrlr_bdev.o 00:01:47.462 CC lib/scsi/scsi_pr.o 00:01:47.462 CC lib/ftl/ftl_sb.o 00:01:47.462 CC lib/nvmf/subsystem.o 00:01:47.462 CC lib/scsi/scsi_rpc.o 00:01:47.462 CC lib/scsi/task.o 00:01:47.462 CC lib/ftl/ftl_l2p.o 00:01:47.462 CC lib/nvmf/nvmf.o 00:01:47.462 CC lib/ftl/ftl_l2p_flat.o 00:01:47.462 CC lib/nvmf/nvmf_rpc.o 00:01:47.462 CC lib/ftl/ftl_nv_cache.o 00:01:47.462 CC lib/nvmf/transport.o 00:01:47.462 CC lib/ftl/ftl_band.o 00:01:47.462 CC lib/nvmf/tcp.o 00:01:47.462 CC lib/ftl/ftl_band_ops.o 00:01:47.462 CC lib/nvmf/stubs.o 00:01:47.462 CC lib/ftl/ftl_writer.o 00:01:47.462 CC lib/nvmf/mdns_server.o 00:01:47.462 CC lib/nvmf/vfio_user.o 00:01:47.462 CC lib/ftl/ftl_rq.o 00:01:47.462 CC lib/ftl/ftl_reloc.o 00:01:47.462 CC lib/nvmf/rdma.o 00:01:47.462 CC lib/ftl/ftl_l2p_cache.o 00:01:47.462 CC lib/nvmf/auth.o 00:01:47.462 CC lib/ftl/ftl_p2l.o 00:01:47.462 CC lib/ftl/mngt/ftl_mngt.o 00:01:47.462 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:47.462 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:47.462 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:47.462 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:47.462 SO libspdk_blobfs.so.10.0 00:01:47.724 SYMLINK libspdk_blobfs.so 00:01:47.724 LIB libspdk_lvol.a 00:01:47.724 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:47.724 SO libspdk_lvol.so.10.0 00:01:47.986 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:47.986 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:47.986 SYMLINK libspdk_lvol.so 00:01:47.986 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:47.986 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:47.986 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:47.986 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:47.987 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:47.987 CC lib/ftl/utils/ftl_conf.o 00:01:47.987 CC lib/ftl/utils/ftl_md.o 00:01:47.987 CC lib/ftl/utils/ftl_mempool.o 00:01:47.987 CC lib/ftl/utils/ftl_bitmap.o 00:01:47.987 CC lib/ftl/utils/ftl_property.o 00:01:47.987 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:47.987 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:47.987 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:47.987 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:47.987 CC 
lib/ftl/upgrade/ftl_band_upgrade.o 00:01:47.987 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:47.987 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:48.244 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:48.244 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:48.244 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:48.244 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:48.244 CC lib/ftl/base/ftl_base_dev.o 00:01:48.244 CC lib/ftl/base/ftl_base_bdev.o 00:01:48.244 CC lib/ftl/ftl_trace.o 00:01:48.244 LIB libspdk_nbd.a 00:01:48.244 SO libspdk_nbd.so.7.0 00:01:48.501 SYMLINK libspdk_nbd.so 00:01:48.501 LIB libspdk_scsi.a 00:01:48.501 SO libspdk_scsi.so.9.0 00:01:48.501 SYMLINK libspdk_scsi.so 00:01:48.501 LIB libspdk_ublk.a 00:01:48.759 SO libspdk_ublk.so.3.0 00:01:48.759 SYMLINK libspdk_ublk.so 00:01:48.759 CC lib/vhost/vhost.o 00:01:48.759 CC lib/iscsi/conn.o 00:01:48.759 CC lib/vhost/vhost_rpc.o 00:01:48.759 CC lib/iscsi/init_grp.o 00:01:48.759 CC lib/vhost/vhost_scsi.o 00:01:48.759 CC lib/iscsi/iscsi.o 00:01:48.759 CC lib/vhost/vhost_blk.o 00:01:48.759 CC lib/iscsi/md5.o 00:01:48.759 CC lib/vhost/rte_vhost_user.o 00:01:48.759 CC lib/iscsi/param.o 00:01:48.759 CC lib/iscsi/portal_grp.o 00:01:48.759 CC lib/iscsi/tgt_node.o 00:01:48.759 CC lib/iscsi/iscsi_subsystem.o 00:01:48.759 CC lib/iscsi/iscsi_rpc.o 00:01:48.759 CC lib/iscsi/task.o 00:01:49.017 LIB libspdk_ftl.a 00:01:49.274 SO libspdk_ftl.so.9.0 00:01:49.530 SYMLINK libspdk_ftl.so 00:01:50.097 LIB libspdk_vhost.a 00:01:50.097 SO libspdk_vhost.so.8.0 00:01:50.097 LIB libspdk_nvmf.a 00:01:50.097 SYMLINK libspdk_vhost.so 00:01:50.097 SO libspdk_nvmf.so.19.0 00:01:50.097 LIB libspdk_iscsi.a 00:01:50.355 SO libspdk_iscsi.so.8.0 00:01:50.355 SYMLINK libspdk_nvmf.so 00:01:50.355 SYMLINK libspdk_iscsi.so 00:01:50.612 CC module/env_dpdk/env_dpdk_rpc.o 00:01:50.612 CC module/vfu_device/vfu_virtio.o 00:01:50.612 CC module/vfu_device/vfu_virtio_blk.o 00:01:50.612 CC module/vfu_device/vfu_virtio_scsi.o 00:01:50.612 CC module/vfu_device/vfu_virtio_rpc.o 00:01:50.612 CC module/accel/error/accel_error.o 00:01:50.612 CC module/scheduler/gscheduler/gscheduler.o 00:01:50.612 CC module/sock/posix/posix.o 00:01:50.612 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:50.612 CC module/accel/error/accel_error_rpc.o 00:01:50.612 CC module/accel/ioat/accel_ioat.o 00:01:50.612 CC module/accel/iaa/accel_iaa.o 00:01:50.612 CC module/keyring/linux/keyring.o 00:01:50.612 CC module/accel/ioat/accel_ioat_rpc.o 00:01:50.612 CC module/accel/iaa/accel_iaa_rpc.o 00:01:50.612 CC module/keyring/file/keyring.o 00:01:50.612 CC module/keyring/linux/keyring_rpc.o 00:01:50.612 CC module/keyring/file/keyring_rpc.o 00:01:50.613 CC module/accel/dsa/accel_dsa.o 00:01:50.613 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:50.613 CC module/blob/bdev/blob_bdev.o 00:01:50.613 CC module/accel/dsa/accel_dsa_rpc.o 00:01:50.870 LIB libspdk_env_dpdk_rpc.a 00:01:50.870 SO libspdk_env_dpdk_rpc.so.6.0 00:01:50.870 SYMLINK libspdk_env_dpdk_rpc.so 00:01:50.870 LIB libspdk_keyring_linux.a 00:01:50.870 LIB libspdk_keyring_file.a 00:01:50.870 LIB libspdk_scheduler_gscheduler.a 00:01:50.870 LIB libspdk_scheduler_dpdk_governor.a 00:01:50.870 SO libspdk_keyring_linux.so.1.0 00:01:50.870 SO libspdk_keyring_file.so.1.0 00:01:50.870 SO libspdk_scheduler_gscheduler.so.4.0 00:01:50.870 LIB libspdk_accel_error.a 00:01:50.870 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:50.870 LIB libspdk_scheduler_dynamic.a 00:01:50.870 LIB libspdk_accel_ioat.a 00:01:50.870 LIB libspdk_accel_iaa.a 00:01:50.870 SO libspdk_accel_error.so.2.0 00:01:50.870 SYMLINK 
libspdk_keyring_linux.so 00:01:50.870 SO libspdk_scheduler_dynamic.so.4.0 00:01:50.870 SYMLINK libspdk_keyring_file.so 00:01:50.870 SYMLINK libspdk_scheduler_gscheduler.so 00:01:51.128 SO libspdk_accel_ioat.so.6.0 00:01:51.128 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:51.128 SO libspdk_accel_iaa.so.3.0 00:01:51.128 LIB libspdk_accel_dsa.a 00:01:51.128 SYMLINK libspdk_scheduler_dynamic.so 00:01:51.128 LIB libspdk_blob_bdev.a 00:01:51.128 SYMLINK libspdk_accel_error.so 00:01:51.128 SYMLINK libspdk_accel_ioat.so 00:01:51.128 SO libspdk_accel_dsa.so.5.0 00:01:51.128 SO libspdk_blob_bdev.so.11.0 00:01:51.128 SYMLINK libspdk_accel_iaa.so 00:01:51.128 SYMLINK libspdk_blob_bdev.so 00:01:51.128 SYMLINK libspdk_accel_dsa.so 00:01:51.389 LIB libspdk_vfu_device.a 00:01:51.389 SO libspdk_vfu_device.so.3.0 00:01:51.389 CC module/bdev/gpt/gpt.o 00:01:51.389 CC module/blobfs/bdev/blobfs_bdev.o 00:01:51.389 CC module/bdev/error/vbdev_error.o 00:01:51.389 CC module/bdev/gpt/vbdev_gpt.o 00:01:51.389 CC module/bdev/passthru/vbdev_passthru.o 00:01:51.389 CC module/bdev/delay/vbdev_delay.o 00:01:51.389 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:51.389 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:51.389 CC module/bdev/error/vbdev_error_rpc.o 00:01:51.389 CC module/bdev/malloc/bdev_malloc.o 00:01:51.389 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:51.389 CC module/bdev/null/bdev_null.o 00:01:51.389 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:51.389 CC module/bdev/raid/bdev_raid.o 00:01:51.389 CC module/bdev/split/vbdev_split.o 00:01:51.389 CC module/bdev/null/bdev_null_rpc.o 00:01:51.389 CC module/bdev/nvme/bdev_nvme.o 00:01:51.389 CC module/bdev/split/vbdev_split_rpc.o 00:01:51.389 CC module/bdev/raid/bdev_raid_rpc.o 00:01:51.389 CC module/bdev/aio/bdev_aio.o 00:01:51.389 CC module/bdev/lvol/vbdev_lvol.o 00:01:51.389 CC module/bdev/raid/bdev_raid_sb.o 00:01:51.389 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:51.389 CC module/bdev/aio/bdev_aio_rpc.o 00:01:51.389 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:51.389 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:51.389 CC module/bdev/raid/raid0.o 00:01:51.389 CC module/bdev/nvme/nvme_rpc.o 00:01:51.389 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:51.389 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:51.389 CC module/bdev/raid/raid1.o 00:01:51.389 CC module/bdev/nvme/vbdev_opal.o 00:01:51.389 CC module/bdev/nvme/bdev_mdns_client.o 00:01:51.389 CC module/bdev/iscsi/bdev_iscsi.o 00:01:51.389 CC module/bdev/raid/concat.o 00:01:51.389 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:51.389 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:51.389 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:51.389 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:51.389 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:51.389 CC module/bdev/ftl/bdev_ftl.o 00:01:51.389 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:51.389 SYMLINK libspdk_vfu_device.so 00:01:51.647 LIB libspdk_sock_posix.a 00:01:51.647 SO libspdk_sock_posix.so.6.0 00:01:51.647 LIB libspdk_blobfs_bdev.a 00:01:51.647 SYMLINK libspdk_sock_posix.so 00:01:51.904 SO libspdk_blobfs_bdev.so.6.0 00:01:51.904 LIB libspdk_bdev_split.a 00:01:51.904 LIB libspdk_bdev_error.a 00:01:51.904 LIB libspdk_bdev_ftl.a 00:01:51.904 SO libspdk_bdev_split.so.6.0 00:01:51.904 SYMLINK libspdk_blobfs_bdev.so 00:01:51.904 LIB libspdk_bdev_gpt.a 00:01:51.904 SO libspdk_bdev_error.so.6.0 00:01:51.904 SO libspdk_bdev_ftl.so.6.0 00:01:51.904 LIB libspdk_bdev_null.a 00:01:51.904 SO libspdk_bdev_gpt.so.6.0 00:01:51.904 SYMLINK libspdk_bdev_split.so 
00:01:51.904 SO libspdk_bdev_null.so.6.0 00:01:51.904 LIB libspdk_bdev_passthru.a 00:01:51.904 SYMLINK libspdk_bdev_error.so 00:01:51.904 SYMLINK libspdk_bdev_ftl.so 00:01:51.904 LIB libspdk_bdev_iscsi.a 00:01:51.904 SO libspdk_bdev_passthru.so.6.0 00:01:51.904 SYMLINK libspdk_bdev_gpt.so 00:01:51.904 SYMLINK libspdk_bdev_null.so 00:01:51.904 SO libspdk_bdev_iscsi.so.6.0 00:01:51.904 LIB libspdk_bdev_aio.a 00:01:51.904 LIB libspdk_bdev_zone_block.a 00:01:51.904 LIB libspdk_bdev_delay.a 00:01:51.904 SYMLINK libspdk_bdev_passthru.so 00:01:51.904 SO libspdk_bdev_aio.so.6.0 00:01:51.904 LIB libspdk_bdev_malloc.a 00:01:51.904 SO libspdk_bdev_zone_block.so.6.0 00:01:51.904 SO libspdk_bdev_delay.so.6.0 00:01:51.904 SYMLINK libspdk_bdev_iscsi.so 00:01:52.161 SO libspdk_bdev_malloc.so.6.0 00:01:52.161 SYMLINK libspdk_bdev_aio.so 00:01:52.161 SYMLINK libspdk_bdev_delay.so 00:01:52.161 SYMLINK libspdk_bdev_zone_block.so 00:01:52.161 LIB libspdk_bdev_lvol.a 00:01:52.161 SYMLINK libspdk_bdev_malloc.so 00:01:52.161 SO libspdk_bdev_lvol.so.6.0 00:01:52.161 SYMLINK libspdk_bdev_lvol.so 00:01:52.161 LIB libspdk_bdev_virtio.a 00:01:52.161 SO libspdk_bdev_virtio.so.6.0 00:01:52.419 SYMLINK libspdk_bdev_virtio.so 00:01:52.677 LIB libspdk_bdev_raid.a 00:01:52.677 SO libspdk_bdev_raid.so.6.0 00:01:52.677 SYMLINK libspdk_bdev_raid.so 00:01:54.051 LIB libspdk_bdev_nvme.a 00:01:54.051 SO libspdk_bdev_nvme.so.7.0 00:01:54.051 SYMLINK libspdk_bdev_nvme.so 00:01:54.309 CC module/event/subsystems/iobuf/iobuf.o 00:01:54.309 CC module/event/subsystems/scheduler/scheduler.o 00:01:54.309 CC module/event/subsystems/sock/sock.o 00:01:54.309 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:54.309 CC module/event/subsystems/vmd/vmd.o 00:01:54.309 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:54.310 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:54.310 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:54.310 CC module/event/subsystems/keyring/keyring.o 00:01:54.310 LIB libspdk_event_keyring.a 00:01:54.567 LIB libspdk_event_vhost_blk.a 00:01:54.567 LIB libspdk_event_vfu_tgt.a 00:01:54.567 LIB libspdk_event_scheduler.a 00:01:54.567 LIB libspdk_event_vmd.a 00:01:54.567 LIB libspdk_event_sock.a 00:01:54.567 SO libspdk_event_keyring.so.1.0 00:01:54.567 LIB libspdk_event_iobuf.a 00:01:54.567 SO libspdk_event_vhost_blk.so.3.0 00:01:54.567 SO libspdk_event_vfu_tgt.so.3.0 00:01:54.567 SO libspdk_event_scheduler.so.4.0 00:01:54.567 SO libspdk_event_sock.so.5.0 00:01:54.567 SO libspdk_event_vmd.so.6.0 00:01:54.567 SO libspdk_event_iobuf.so.3.0 00:01:54.567 SYMLINK libspdk_event_keyring.so 00:01:54.567 SYMLINK libspdk_event_vhost_blk.so 00:01:54.567 SYMLINK libspdk_event_scheduler.so 00:01:54.567 SYMLINK libspdk_event_vfu_tgt.so 00:01:54.567 SYMLINK libspdk_event_sock.so 00:01:54.567 SYMLINK libspdk_event_vmd.so 00:01:54.567 SYMLINK libspdk_event_iobuf.so 00:01:54.826 CC module/event/subsystems/accel/accel.o 00:01:54.826 LIB libspdk_event_accel.a 00:01:54.826 SO libspdk_event_accel.so.6.0 00:01:54.826 SYMLINK libspdk_event_accel.so 00:01:55.083 CC module/event/subsystems/bdev/bdev.o 00:01:55.342 LIB libspdk_event_bdev.a 00:01:55.342 SO libspdk_event_bdev.so.6.0 00:01:55.342 SYMLINK libspdk_event_bdev.so 00:01:55.600 CC module/event/subsystems/ublk/ublk.o 00:01:55.600 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:55.600 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:55.600 CC module/event/subsystems/nbd/nbd.o 00:01:55.600 CC module/event/subsystems/scsi/scsi.o 00:01:55.600 LIB libspdk_event_ublk.a 00:01:55.600 LIB 
libspdk_event_nbd.a 00:01:55.600 LIB libspdk_event_scsi.a 00:01:55.600 SO libspdk_event_nbd.so.6.0 00:01:55.600 SO libspdk_event_ublk.so.3.0 00:01:55.600 SO libspdk_event_scsi.so.6.0 00:01:55.857 SYMLINK libspdk_event_nbd.so 00:01:55.857 SYMLINK libspdk_event_ublk.so 00:01:55.857 SYMLINK libspdk_event_scsi.so 00:01:55.857 LIB libspdk_event_nvmf.a 00:01:55.857 SO libspdk_event_nvmf.so.6.0 00:01:55.857 SYMLINK libspdk_event_nvmf.so 00:01:55.857 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:55.857 CC module/event/subsystems/iscsi/iscsi.o 00:01:56.114 LIB libspdk_event_vhost_scsi.a 00:01:56.114 LIB libspdk_event_iscsi.a 00:01:56.114 SO libspdk_event_vhost_scsi.so.3.0 00:01:56.114 SO libspdk_event_iscsi.so.6.0 00:01:56.114 SYMLINK libspdk_event_vhost_scsi.so 00:01:56.114 SYMLINK libspdk_event_iscsi.so 00:01:56.371 SO libspdk.so.6.0 00:01:56.371 SYMLINK libspdk.so 00:01:56.371 CC app/trace_record/trace_record.o 00:01:56.371 CXX app/trace/trace.o 00:01:56.371 CC app/spdk_top/spdk_top.o 00:01:56.371 CC app/spdk_lspci/spdk_lspci.o 00:01:56.371 CC app/spdk_nvme_discover/discovery_aer.o 00:01:56.371 CC app/spdk_nvme_perf/perf.o 00:01:56.371 CC test/rpc_client/rpc_client_test.o 00:01:56.371 CC app/spdk_nvme_identify/identify.o 00:01:56.371 TEST_HEADER include/spdk/accel.h 00:01:56.371 TEST_HEADER include/spdk/accel_module.h 00:01:56.371 TEST_HEADER include/spdk/assert.h 00:01:56.371 TEST_HEADER include/spdk/barrier.h 00:01:56.371 TEST_HEADER include/spdk/base64.h 00:01:56.371 TEST_HEADER include/spdk/bdev.h 00:01:56.371 TEST_HEADER include/spdk/bdev_module.h 00:01:56.371 TEST_HEADER include/spdk/bdev_zone.h 00:01:56.371 TEST_HEADER include/spdk/bit_array.h 00:01:56.371 TEST_HEADER include/spdk/bit_pool.h 00:01:56.371 TEST_HEADER include/spdk/blob_bdev.h 00:01:56.371 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:56.371 TEST_HEADER include/spdk/blobfs.h 00:01:56.371 TEST_HEADER include/spdk/blob.h 00:01:56.371 TEST_HEADER include/spdk/conf.h 00:01:56.371 TEST_HEADER include/spdk/config.h 00:01:56.371 TEST_HEADER include/spdk/crc16.h 00:01:56.371 TEST_HEADER include/spdk/cpuset.h 00:01:56.371 TEST_HEADER include/spdk/crc32.h 00:01:56.371 TEST_HEADER include/spdk/crc64.h 00:01:56.371 TEST_HEADER include/spdk/dif.h 00:01:56.371 TEST_HEADER include/spdk/dma.h 00:01:56.371 TEST_HEADER include/spdk/endian.h 00:01:56.371 TEST_HEADER include/spdk/env_dpdk.h 00:01:56.371 TEST_HEADER include/spdk/env.h 00:01:56.371 TEST_HEADER include/spdk/event.h 00:01:56.371 TEST_HEADER include/spdk/fd_group.h 00:01:56.371 TEST_HEADER include/spdk/fd.h 00:01:56.371 TEST_HEADER include/spdk/file.h 00:01:56.371 TEST_HEADER include/spdk/ftl.h 00:01:56.371 TEST_HEADER include/spdk/hexlify.h 00:01:56.371 TEST_HEADER include/spdk/gpt_spec.h 00:01:56.371 TEST_HEADER include/spdk/histogram_data.h 00:01:56.371 TEST_HEADER include/spdk/idxd.h 00:01:56.371 TEST_HEADER include/spdk/idxd_spec.h 00:01:56.372 TEST_HEADER include/spdk/init.h 00:01:56.372 TEST_HEADER include/spdk/ioat.h 00:01:56.372 TEST_HEADER include/spdk/ioat_spec.h 00:01:56.372 TEST_HEADER include/spdk/iscsi_spec.h 00:01:56.372 TEST_HEADER include/spdk/json.h 00:01:56.372 TEST_HEADER include/spdk/jsonrpc.h 00:01:56.372 TEST_HEADER include/spdk/keyring.h 00:01:56.632 TEST_HEADER include/spdk/keyring_module.h 00:01:56.632 TEST_HEADER include/spdk/log.h 00:01:56.632 TEST_HEADER include/spdk/likely.h 00:01:56.632 TEST_HEADER include/spdk/lvol.h 00:01:56.632 TEST_HEADER include/spdk/memory.h 00:01:56.632 TEST_HEADER include/spdk/mmio.h 00:01:56.632 TEST_HEADER 
include/spdk/nbd.h 00:01:56.632 TEST_HEADER include/spdk/net.h 00:01:56.632 TEST_HEADER include/spdk/notify.h 00:01:56.632 TEST_HEADER include/spdk/nvme.h 00:01:56.632 TEST_HEADER include/spdk/nvme_intel.h 00:01:56.632 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:56.632 TEST_HEADER include/spdk/nvme_spec.h 00:01:56.632 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:56.632 TEST_HEADER include/spdk/nvme_zns.h 00:01:56.632 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:56.632 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:56.632 TEST_HEADER include/spdk/nvmf.h 00:01:56.632 TEST_HEADER include/spdk/nvmf_spec.h 00:01:56.632 TEST_HEADER include/spdk/opal.h 00:01:56.632 TEST_HEADER include/spdk/nvmf_transport.h 00:01:56.632 TEST_HEADER include/spdk/opal_spec.h 00:01:56.632 TEST_HEADER include/spdk/pci_ids.h 00:01:56.632 TEST_HEADER include/spdk/queue.h 00:01:56.632 TEST_HEADER include/spdk/pipe.h 00:01:56.632 TEST_HEADER include/spdk/reduce.h 00:01:56.632 TEST_HEADER include/spdk/scheduler.h 00:01:56.632 TEST_HEADER include/spdk/rpc.h 00:01:56.632 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:56.632 TEST_HEADER include/spdk/scsi.h 00:01:56.632 TEST_HEADER include/spdk/sock.h 00:01:56.632 TEST_HEADER include/spdk/scsi_spec.h 00:01:56.632 TEST_HEADER include/spdk/stdinc.h 00:01:56.632 TEST_HEADER include/spdk/string.h 00:01:56.632 TEST_HEADER include/spdk/thread.h 00:01:56.632 TEST_HEADER include/spdk/trace.h 00:01:56.632 TEST_HEADER include/spdk/trace_parser.h 00:01:56.632 TEST_HEADER include/spdk/tree.h 00:01:56.632 TEST_HEADER include/spdk/ublk.h 00:01:56.632 TEST_HEADER include/spdk/util.h 00:01:56.632 TEST_HEADER include/spdk/uuid.h 00:01:56.632 TEST_HEADER include/spdk/version.h 00:01:56.632 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:56.632 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:56.632 TEST_HEADER include/spdk/vhost.h 00:01:56.632 CC app/spdk_dd/spdk_dd.o 00:01:56.632 TEST_HEADER include/spdk/vmd.h 00:01:56.632 TEST_HEADER include/spdk/xor.h 00:01:56.632 TEST_HEADER include/spdk/zipf.h 00:01:56.632 CXX test/cpp_headers/accel.o 00:01:56.632 CXX test/cpp_headers/accel_module.o 00:01:56.632 CXX test/cpp_headers/assert.o 00:01:56.632 CXX test/cpp_headers/barrier.o 00:01:56.632 CXX test/cpp_headers/base64.o 00:01:56.632 CC app/iscsi_tgt/iscsi_tgt.o 00:01:56.632 CXX test/cpp_headers/bdev.o 00:01:56.632 CXX test/cpp_headers/bdev_module.o 00:01:56.632 CXX test/cpp_headers/bit_array.o 00:01:56.632 CXX test/cpp_headers/bdev_zone.o 00:01:56.632 CXX test/cpp_headers/bit_pool.o 00:01:56.632 CXX test/cpp_headers/blob_bdev.o 00:01:56.632 CXX test/cpp_headers/blobfs.o 00:01:56.632 CXX test/cpp_headers/blobfs_bdev.o 00:01:56.632 CXX test/cpp_headers/blob.o 00:01:56.632 CXX test/cpp_headers/conf.o 00:01:56.632 CXX test/cpp_headers/config.o 00:01:56.632 CXX test/cpp_headers/cpuset.o 00:01:56.632 CXX test/cpp_headers/crc16.o 00:01:56.632 CC app/nvmf_tgt/nvmf_main.o 00:01:56.632 CC app/spdk_tgt/spdk_tgt.o 00:01:56.632 CXX test/cpp_headers/crc32.o 00:01:56.632 CC test/env/pci/pci_ut.o 00:01:56.632 CC test/env/vtophys/vtophys.o 00:01:56.632 CC test/env/memory/memory_ut.o 00:01:56.632 CC test/app/histogram_perf/histogram_perf.o 00:01:56.632 CC test/app/jsoncat/jsoncat.o 00:01:56.632 CC app/fio/nvme/fio_plugin.o 00:01:56.632 CC examples/util/zipf/zipf.o 00:01:56.632 CC test/app/stub/stub.o 00:01:56.632 CC examples/ioat/verify/verify.o 00:01:56.632 CC test/thread/poller_perf/poller_perf.o 00:01:56.632 CC examples/ioat/perf/perf.o 00:01:56.632 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:56.632 
CC test/dma/test_dma/test_dma.o 00:01:56.632 CC app/fio/bdev/fio_plugin.o 00:01:56.632 CC test/app/bdev_svc/bdev_svc.o 00:01:56.892 CC test/env/mem_callbacks/mem_callbacks.o 00:01:56.892 LINK spdk_lspci 00:01:56.892 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:56.892 LINK rpc_client_test 00:01:56.892 LINK spdk_nvme_discover 00:01:56.892 LINK interrupt_tgt 00:01:56.892 LINK jsoncat 00:01:56.892 LINK histogram_perf 00:01:56.892 LINK vtophys 00:01:56.892 LINK zipf 00:01:56.892 LINK poller_perf 00:01:56.892 CXX test/cpp_headers/crc64.o 00:01:56.892 CXX test/cpp_headers/dif.o 00:01:56.892 CXX test/cpp_headers/dma.o 00:01:56.892 CXX test/cpp_headers/endian.o 00:01:56.892 CXX test/cpp_headers/env_dpdk.o 00:01:56.892 CXX test/cpp_headers/env.o 00:01:56.892 CXX test/cpp_headers/event.o 00:01:56.892 LINK spdk_trace_record 00:01:56.892 CXX test/cpp_headers/fd_group.o 00:01:56.892 CXX test/cpp_headers/fd.o 00:01:56.892 LINK stub 00:01:56.892 LINK env_dpdk_post_init 00:01:57.151 CXX test/cpp_headers/file.o 00:01:57.151 LINK nvmf_tgt 00:01:57.151 LINK iscsi_tgt 00:01:57.151 CXX test/cpp_headers/ftl.o 00:01:57.151 CXX test/cpp_headers/gpt_spec.o 00:01:57.151 CXX test/cpp_headers/hexlify.o 00:01:57.151 LINK spdk_tgt 00:01:57.151 CXX test/cpp_headers/histogram_data.o 00:01:57.151 CXX test/cpp_headers/idxd.o 00:01:57.151 CXX test/cpp_headers/idxd_spec.o 00:01:57.151 LINK ioat_perf 00:01:57.151 LINK verify 00:01:57.151 LINK bdev_svc 00:01:57.151 CXX test/cpp_headers/init.o 00:01:57.151 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:57.151 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:57.151 CXX test/cpp_headers/ioat.o 00:01:57.151 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:57.151 LINK spdk_dd 00:01:57.412 CXX test/cpp_headers/ioat_spec.o 00:01:57.412 CXX test/cpp_headers/iscsi_spec.o 00:01:57.412 CXX test/cpp_headers/json.o 00:01:57.412 CXX test/cpp_headers/jsonrpc.o 00:01:57.412 CXX test/cpp_headers/keyring.o 00:01:57.412 CXX test/cpp_headers/keyring_module.o 00:01:57.412 CXX test/cpp_headers/likely.o 00:01:57.412 LINK spdk_trace 00:01:57.412 CXX test/cpp_headers/log.o 00:01:57.412 CXX test/cpp_headers/lvol.o 00:01:57.412 CXX test/cpp_headers/memory.o 00:01:57.412 LINK pci_ut 00:01:57.412 CXX test/cpp_headers/mmio.o 00:01:57.412 CXX test/cpp_headers/nbd.o 00:01:57.412 CXX test/cpp_headers/net.o 00:01:57.412 CXX test/cpp_headers/notify.o 00:01:57.412 CXX test/cpp_headers/nvme.o 00:01:57.412 CXX test/cpp_headers/nvme_intel.o 00:01:57.412 CXX test/cpp_headers/nvme_ocssd.o 00:01:57.412 LINK test_dma 00:01:57.412 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:57.412 CXX test/cpp_headers/nvme_spec.o 00:01:57.412 CXX test/cpp_headers/nvme_zns.o 00:01:57.412 CXX test/cpp_headers/nvmf_cmd.o 00:01:57.412 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:57.412 CXX test/cpp_headers/nvmf.o 00:01:57.412 CXX test/cpp_headers/nvmf_spec.o 00:01:57.412 CXX test/cpp_headers/nvmf_transport.o 00:01:57.412 CXX test/cpp_headers/opal.o 00:01:57.674 LINK nvme_fuzz 00:01:57.674 CXX test/cpp_headers/opal_spec.o 00:01:57.674 CC examples/sock/hello_world/hello_sock.o 00:01:57.674 CC test/event/event_perf/event_perf.o 00:01:57.674 CXX test/cpp_headers/pci_ids.o 00:01:57.674 CC test/event/reactor_perf/reactor_perf.o 00:01:57.674 CC test/event/reactor/reactor.o 00:01:57.674 CXX test/cpp_headers/pipe.o 00:01:57.674 CXX test/cpp_headers/queue.o 00:01:57.674 CXX test/cpp_headers/reduce.o 00:01:57.674 CXX test/cpp_headers/rpc.o 00:01:57.674 CC examples/idxd/perf/perf.o 00:01:57.674 CC examples/vmd/lsvmd/lsvmd.o 00:01:57.674 LINK spdk_nvme 
00:01:57.674 LINK spdk_bdev 00:01:57.674 CC examples/thread/thread/thread_ex.o 00:01:57.674 CXX test/cpp_headers/scheduler.o 00:01:57.674 CXX test/cpp_headers/scsi.o 00:01:57.675 CC test/event/app_repeat/app_repeat.o 00:01:57.675 CXX test/cpp_headers/scsi_spec.o 00:01:57.675 CXX test/cpp_headers/sock.o 00:01:57.675 CC test/event/scheduler/scheduler.o 00:01:57.675 CXX test/cpp_headers/stdinc.o 00:01:57.933 CXX test/cpp_headers/string.o 00:01:57.933 CXX test/cpp_headers/thread.o 00:01:57.933 CXX test/cpp_headers/trace.o 00:01:57.933 CXX test/cpp_headers/trace_parser.o 00:01:57.933 CC examples/vmd/led/led.o 00:01:57.933 CXX test/cpp_headers/tree.o 00:01:57.933 CXX test/cpp_headers/ublk.o 00:01:57.933 CXX test/cpp_headers/util.o 00:01:57.933 CXX test/cpp_headers/uuid.o 00:01:57.933 CXX test/cpp_headers/version.o 00:01:57.933 CXX test/cpp_headers/vfio_user_pci.o 00:01:57.934 CXX test/cpp_headers/vfio_user_spec.o 00:01:57.934 CXX test/cpp_headers/vhost.o 00:01:57.934 CXX test/cpp_headers/vmd.o 00:01:57.934 CC app/vhost/vhost.o 00:01:57.934 CXX test/cpp_headers/xor.o 00:01:57.934 CXX test/cpp_headers/zipf.o 00:01:57.934 LINK reactor 00:01:57.934 LINK mem_callbacks 00:01:57.934 LINK vhost_fuzz 00:01:57.934 LINK event_perf 00:01:57.934 LINK reactor_perf 00:01:57.934 LINK lsvmd 00:01:57.934 LINK spdk_nvme_perf 00:01:58.192 LINK app_repeat 00:01:58.192 LINK spdk_nvme_identify 00:01:58.192 LINK spdk_top 00:01:58.192 LINK hello_sock 00:01:58.192 LINK led 00:01:58.192 CC test/nvme/overhead/overhead.o 00:01:58.192 CC test/nvme/reset/reset.o 00:01:58.192 CC test/nvme/sgl/sgl.o 00:01:58.192 CC test/nvme/aer/aer.o 00:01:58.192 CC test/nvme/err_injection/err_injection.o 00:01:58.192 CC test/nvme/startup/startup.o 00:01:58.192 CC test/nvme/reserve/reserve.o 00:01:58.192 CC test/nvme/e2edp/nvme_dp.o 00:01:58.192 LINK thread 00:01:58.192 CC test/blobfs/mkfs/mkfs.o 00:01:58.192 CC test/accel/dif/dif.o 00:01:58.192 LINK scheduler 00:01:58.192 CC test/nvme/simple_copy/simple_copy.o 00:01:58.192 CC test/nvme/connect_stress/connect_stress.o 00:01:58.192 CC test/nvme/boot_partition/boot_partition.o 00:01:58.451 CC test/nvme/compliance/nvme_compliance.o 00:01:58.451 CC test/lvol/esnap/esnap.o 00:01:58.451 CC test/nvme/cuse/cuse.o 00:01:58.451 CC test/nvme/fdp/fdp.o 00:01:58.451 CC test/nvme/fused_ordering/fused_ordering.o 00:01:58.451 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:58.451 LINK vhost 00:01:58.451 LINK idxd_perf 00:01:58.451 LINK err_injection 00:01:58.451 LINK reserve 00:01:58.451 LINK doorbell_aers 00:01:58.451 LINK startup 00:01:58.708 LINK sgl 00:01:58.708 LINK fused_ordering 00:01:58.708 LINK mkfs 00:01:58.708 CC examples/nvme/hotplug/hotplug.o 00:01:58.708 LINK simple_copy 00:01:58.708 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:58.708 CC examples/nvme/hello_world/hello_world.o 00:01:58.708 CC examples/nvme/arbitration/arbitration.o 00:01:58.708 CC examples/nvme/reconnect/reconnect.o 00:01:58.708 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:58.708 CC examples/nvme/abort/abort.o 00:01:58.708 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:58.708 LINK reset 00:01:58.708 LINK connect_stress 00:01:58.708 LINK boot_partition 00:01:58.708 LINK overhead 00:01:58.708 LINK aer 00:01:58.708 CC examples/accel/perf/accel_perf.o 00:01:58.708 LINK nvme_compliance 00:01:58.708 LINK nvme_dp 00:01:58.708 CC examples/blob/cli/blobcli.o 00:01:58.708 LINK memory_ut 00:01:58.708 CC examples/blob/hello_world/hello_blob.o 00:01:58.966 LINK pmr_persistence 00:01:58.966 LINK cmb_copy 00:01:58.966 LINK fdp 
00:01:58.966 LINK hello_world 00:01:58.966 LINK hotplug 00:01:58.966 LINK dif 00:01:58.966 LINK arbitration 00:01:58.966 LINK abort 00:01:59.223 LINK hello_blob 00:01:59.223 LINK reconnect 00:01:59.223 LINK nvme_manage 00:01:59.223 LINK accel_perf 00:01:59.223 LINK blobcli 00:01:59.481 CC test/bdev/bdevio/bdevio.o 00:01:59.739 LINK iscsi_fuzz 00:01:59.739 CC examples/bdev/hello_world/hello_bdev.o 00:01:59.739 CC examples/bdev/bdevperf/bdevperf.o 00:01:59.739 LINK bdevio 00:01:59.739 LINK hello_bdev 00:01:59.997 LINK cuse 00:02:00.254 LINK bdevperf 00:02:00.819 CC examples/nvmf/nvmf/nvmf.o 00:02:01.077 LINK nvmf 00:02:03.665 LINK esnap 00:02:03.940 00:02:03.940 real 0m49.152s 00:02:03.940 user 10m9.574s 00:02:03.940 sys 2m28.976s 00:02:03.940 23:05:19 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:03.940 23:05:19 make -- common/autotest_common.sh@10 -- $ set +x 00:02:03.940 ************************************ 00:02:03.940 END TEST make 00:02:03.940 ************************************ 00:02:03.940 23:05:19 -- common/autotest_common.sh@1142 -- $ return 0 00:02:03.940 23:05:19 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:03.940 23:05:19 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:03.940 23:05:19 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:03.940 23:05:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:03.940 23:05:19 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:03.940 23:05:19 -- pm/common@44 -- $ pid=2126568 00:02:03.940 23:05:19 -- pm/common@50 -- $ kill -TERM 2126568 00:02:03.940 23:05:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:03.940 23:05:19 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:03.940 23:05:19 -- pm/common@44 -- $ pid=2126570 00:02:03.940 23:05:19 -- pm/common@50 -- $ kill -TERM 2126570 00:02:03.940 23:05:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:03.940 23:05:19 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:03.940 23:05:19 -- pm/common@44 -- $ pid=2126572 00:02:03.940 23:05:19 -- pm/common@50 -- $ kill -TERM 2126572 00:02:03.940 23:05:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:03.940 23:05:19 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:03.940 23:05:19 -- pm/common@44 -- $ pid=2126601 00:02:03.940 23:05:19 -- pm/common@50 -- $ sudo -E kill -TERM 2126601 00:02:03.940 23:05:19 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:03.940 23:05:19 -- nvmf/common.sh@7 -- # uname -s 00:02:03.940 23:05:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:03.940 23:05:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:03.940 23:05:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:03.940 23:05:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:03.940 23:05:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:03.940 23:05:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:03.940 23:05:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:03.940 23:05:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:03.940 23:05:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:03.940 23:05:19 -- nvmf/common.sh@17 -- # nvme 
gen-hostnqn 00:02:03.941 23:05:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:02:03.941 23:05:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:02:03.941 23:05:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:03.941 23:05:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:03.941 23:05:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:03.941 23:05:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:03.941 23:05:19 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:03.941 23:05:19 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:03.941 23:05:19 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:03.941 23:05:19 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:03.941 23:05:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:03.941 23:05:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:03.941 23:05:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:03.941 23:05:19 -- paths/export.sh@5 -- # export PATH 00:02:03.941 23:05:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:03.941 23:05:19 -- nvmf/common.sh@47 -- # : 0 00:02:03.941 23:05:19 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:03.941 23:05:19 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:03.941 23:05:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:03.941 23:05:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:03.941 23:05:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:03.941 23:05:19 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:03.941 23:05:19 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:03.941 23:05:19 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:03.941 23:05:19 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:03.941 23:05:19 -- spdk/autotest.sh@32 -- # uname -s 00:02:03.941 23:05:19 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:03.941 23:05:19 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:03.941 23:05:19 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:03.941 23:05:19 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:03.941 23:05:19 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:03.941 23:05:19 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:03.941 23:05:19 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:03.941 23:05:19 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:03.941 23:05:19 -- spdk/autotest.sh@48 -- # udevadm_pid=2182045 00:02:03.941 23:05:19 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:03.941 23:05:19 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:03.941 23:05:19 -- pm/common@17 -- # local monitor 00:02:03.941 23:05:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:03.941 23:05:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:03.941 23:05:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:03.941 23:05:19 -- pm/common@21 -- # date +%s 00:02:03.941 23:05:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:03.941 23:05:19 -- pm/common@21 -- # date +%s 00:02:03.941 23:05:19 -- pm/common@25 -- # sleep 1 00:02:03.941 23:05:19 -- pm/common@21 -- # date +%s 00:02:03.941 23:05:19 -- pm/common@21 -- # date +%s 00:02:03.941 23:05:19 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721077519 00:02:03.941 23:05:19 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721077519 00:02:03.941 23:05:19 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721077519 00:02:03.941 23:05:19 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721077519 00:02:03.941 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721077519_collect-vmstat.pm.log 00:02:03.941 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721077519_collect-cpu-load.pm.log 00:02:03.941 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721077519_collect-cpu-temp.pm.log 00:02:03.941 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721077519_collect-bmc-pm.bmc.pm.log 00:02:05.325 23:05:20 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:05.325 23:05:20 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:05.325 23:05:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:05.325 23:05:20 -- common/autotest_common.sh@10 -- # set +x 00:02:05.325 23:05:20 -- spdk/autotest.sh@59 -- # create_test_list 00:02:05.325 23:05:20 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:05.325 23:05:20 -- common/autotest_common.sh@10 -- # set +x 00:02:05.325 23:05:20 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:05.325 23:05:20 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:05.325 23:05:20 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
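For reference, the four resource monitors started just above (collect-cpu-load, collect-vmstat, collect-cpu-temp and collect-bmc-pm) all follow the same lifecycle that stop_monitor_resources already showed at the end of the make test earlier in this log: each collector is pointed at the shared power output directory, its output is redirected to a monitor.autotest.sh.<timestamp>_*.pm.log file there, and it is later reaped with kill -TERM via a <collector>.pid file in the same directory. A minimal sketch of that lifecycle, assuming only the -d/-l/-p options and the pid-file naming visible in the trace (backgrounding with & is an assumption, not shown explicitly here):

    # start one collector against the shared power output directory (options as traced above)
    POWER_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
    ./scripts/perf/pm/collect-cpu-load -d "$POWER_DIR" -l -p "monitor.autotest.sh.$(date +%s)" &

    # ... run the tests ...

    # stop it again the way stop_monitor_resources does, via the pid file in the same directory
    if [[ -e "$POWER_DIR/collect-cpu-load.pid" ]]; then
        kill -TERM "$(cat "$POWER_DIR/collect-cpu-load.pid")"
    fi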
00:02:05.325 23:05:20 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:05.325 23:05:20 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:05.325 23:05:20 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:05.325 23:05:20 -- common/autotest_common.sh@1455 -- # uname 00:02:05.325 23:05:20 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:05.325 23:05:20 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:05.325 23:05:20 -- common/autotest_common.sh@1475 -- # uname 00:02:05.325 23:05:20 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:05.325 23:05:20 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:05.325 23:05:20 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:05.325 23:05:20 -- spdk/autotest.sh@72 -- # hash lcov 00:02:05.325 23:05:20 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:05.325 23:05:20 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:05.325 --rc lcov_branch_coverage=1 00:02:05.325 --rc lcov_function_coverage=1 00:02:05.325 --rc genhtml_branch_coverage=1 00:02:05.325 --rc genhtml_function_coverage=1 00:02:05.325 --rc genhtml_legend=1 00:02:05.325 --rc geninfo_all_blocks=1 00:02:05.325 ' 00:02:05.325 23:05:20 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:05.325 --rc lcov_branch_coverage=1 00:02:05.325 --rc lcov_function_coverage=1 00:02:05.325 --rc genhtml_branch_coverage=1 00:02:05.325 --rc genhtml_function_coverage=1 00:02:05.325 --rc genhtml_legend=1 00:02:05.325 --rc geninfo_all_blocks=1 00:02:05.325 ' 00:02:05.325 23:05:20 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:05.325 --rc lcov_branch_coverage=1 00:02:05.325 --rc lcov_function_coverage=1 00:02:05.325 --rc genhtml_branch_coverage=1 00:02:05.325 --rc genhtml_function_coverage=1 00:02:05.325 --rc genhtml_legend=1 00:02:05.325 --rc geninfo_all_blocks=1 00:02:05.325 --no-external' 00:02:05.325 23:05:20 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:05.325 --rc lcov_branch_coverage=1 00:02:05.325 --rc lcov_function_coverage=1 00:02:05.325 --rc genhtml_branch_coverage=1 00:02:05.325 --rc genhtml_function_coverage=1 00:02:05.325 --rc genhtml_legend=1 00:02:05.325 --rc geninfo_all_blocks=1 00:02:05.325 --no-external' 00:02:05.325 23:05:20 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:05.325 lcov: LCOV version 1.14 00:02:05.325 23:05:20 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:07.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:07.220 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:07.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:07.220 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:07.220 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:07.220 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:07.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:07.220 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:07.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:07.220 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:07.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:07.220 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:07.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:07.220 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:07.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:07.220 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:07.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:07.220 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:07.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:07.220 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:07.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:07.220 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:07.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:07.220 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:07.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:07.220 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:07.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:07.220 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:07.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:07.220 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:07.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:07.220 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:07.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:07.220 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:07.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:07.220 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:07.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:07.220 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:07.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:07.220 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:07.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:07.220 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:07.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:07.220 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:07.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:07.220 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:07.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:07.220 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:07.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:07.220 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:07.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:07.220 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:07.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:07.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:07.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:07.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:07.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:07.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:07.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:07.221 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:07.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:07.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:07.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:07.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:07.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:07.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:07.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:07.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:07.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:07.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:07.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:07.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:07.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:07.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:07.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:07.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:07.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:07.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:07.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:07.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:07.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:07.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:07.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:07.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:07.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:07.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:07.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 
00:02:07.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:07.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:07.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:07.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:07.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:07.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:07.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:07.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:07.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:07.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:07.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:07.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:07.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:07.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:02:07.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno 00:02:07.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:07.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:07.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:07.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:07.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:07.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:07.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:07.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:07.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:07.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:07.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:07.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:07.221 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:07.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:07.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:07.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:07.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:07.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:07.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:07.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:07.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:07.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:07.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:07.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:07.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:07.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:07.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:07.221 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:07.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:07.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:07.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:07.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:07.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:07.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:07.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:07.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:07.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:07.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:07.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:07.222 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:07.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:07.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:07.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:07.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:07.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:07.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:07.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:07.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:07.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:07.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:07.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:07.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:07.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:07.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:07.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:07.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:07.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:07.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:07.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:07.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:07.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:07.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:07.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:07.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:07.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:07.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:07.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:07.479 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:07.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:07.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:07.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:07.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:07.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:07.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:07.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:07.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:25.549 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:25.549 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:43.615 23:05:55 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:43.615 23:05:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:43.615 23:05:55 -- common/autotest_common.sh@10 -- # set +x 00:02:43.615 23:05:55 -- spdk/autotest.sh@91 -- # rm -f 00:02:43.615 23:05:55 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:43.615 0000:82:00.0 (8086 0a54): Already using the nvme driver 00:02:43.615 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:02:43.615 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:02:43.615 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:02:43.615 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:02:43.615 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:02:43.615 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:02:43.615 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:02:43.615 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:02:43.615 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:02:43.615 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:02:43.615 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:02:43.615 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:02:43.615 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:02:43.615 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:02:43.615 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:02:43.615 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:02:43.615 23:05:57 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:43.615 23:05:57 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:43.615 23:05:57 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:43.615 23:05:57 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:43.615 23:05:57 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:43.615 23:05:57 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:43.615 23:05:57 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:43.615 
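The get_zoned_devs / is_block_zoned trace that begins at this point boils down to a single sysfs check: a block device is treated as zoned only when /sys/block/<dev>/queue/zoned exists and reports something other than none (nvme0n1 reports none below, so no device is excluded and the GPT/dd checks that follow run against it). A standalone sketch of the same check, with the helper name chosen here purely for illustration:

    # illustrative helper; mirrors the is_block_zoned test traced below
    is_zoned_block_dev() {
        local dev=$1
        [[ -e /sys/block/$dev/queue/zoned ]] && [[ $(cat /sys/block/$dev/queue/zoned) != none ]]
    }

    is_zoned_block_dev nvme0n1 || echo "nvme0n1 is not zoned"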
23:05:57 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:43.615 23:05:57 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:43.615 23:05:57 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:43.615 23:05:57 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:43.615 23:05:57 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:43.615 23:05:57 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:43.615 23:05:57 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:43.615 23:05:57 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:43.615 No valid GPT data, bailing 00:02:43.615 23:05:57 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:43.615 23:05:57 -- scripts/common.sh@391 -- # pt= 00:02:43.615 23:05:57 -- scripts/common.sh@392 -- # return 1 00:02:43.615 23:05:57 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:43.615 1+0 records in 00:02:43.615 1+0 records out 00:02:43.615 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00403653 s, 260 MB/s 00:02:43.615 23:05:57 -- spdk/autotest.sh@118 -- # sync 00:02:43.615 23:05:57 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:43.615 23:05:57 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:43.615 23:05:57 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:43.615 23:05:58 -- spdk/autotest.sh@124 -- # uname -s 00:02:43.615 23:05:58 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:43.615 23:05:58 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:43.615 23:05:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:43.615 23:05:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:43.615 23:05:58 -- common/autotest_common.sh@10 -- # set +x 00:02:43.615 ************************************ 00:02:43.615 START TEST setup.sh 00:02:43.615 ************************************ 00:02:43.615 23:05:58 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:43.873 * Looking for test storage... 00:02:43.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:43.873 23:05:58 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:43.873 23:05:58 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:43.873 23:05:58 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:43.873 23:05:58 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:43.873 23:05:58 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:43.873 23:05:58 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:43.873 ************************************ 00:02:43.873 START TEST acl 00:02:43.873 ************************************ 00:02:43.873 23:05:59 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:43.873 * Looking for test storage... 
00:02:43.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:43.873 23:05:59 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:43.873 23:05:59 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:43.873 23:05:59 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:43.873 23:05:59 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:43.873 23:05:59 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:43.873 23:05:59 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:43.873 23:05:59 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:43.873 23:05:59 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:43.873 23:05:59 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:43.873 23:05:59 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:43.873 23:05:59 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:43.873 23:05:59 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:43.873 23:05:59 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:43.873 23:05:59 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:43.873 23:05:59 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:43.873 23:05:59 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:45.243 23:06:00 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:45.243 23:06:00 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:45.243 23:06:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.243 23:06:00 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:45.243 23:06:00 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:45.243 23:06:00 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:46.207 Hugepages 00:02:46.207 node hugesize free / total 00:02:46.207 23:06:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:46.207 23:06:01 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:46.207 23:06:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.207 23:06:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:46.207 23:06:01 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:46.207 23:06:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.207 23:06:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:46.207 23:06:01 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:46.207 23:06:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.207 00:02:46.207 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:46.207 23:06:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:46.207 23:06:01 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:46.207 23:06:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.464 23:06:01 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:82:00.0 == *:*:*.* ]] 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\2\:\0\0\.\0* ]] 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:46.464 23:06:01 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:46.464 23:06:01 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:46.464 23:06:01 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:46.464 23:06:01 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:46.464 ************************************ 00:02:46.464 START TEST denied 00:02:46.464 ************************************ 00:02:46.464 23:06:01 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:02:46.464 23:06:01 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:82:00.0' 00:02:46.464 23:06:01 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:46.464 23:06:01 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:82:00.0' 00:02:46.464 23:06:01 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:46.464 23:06:01 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:47.836 0000:82:00.0 (8086 0a54): Skipping denied controller at 0000:82:00.0 00:02:47.836 23:06:03 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:82:00.0 00:02:47.836 23:06:03 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:47.836 23:06:03 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:47.836 23:06:03 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:82:00.0 ]] 00:02:47.836 23:06:03 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:82:00.0/driver 00:02:47.836 23:06:03 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:47.836 23:06:03 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:47.836 23:06:03 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:47.836 23:06:03 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:47.836 23:06:03 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:50.366 00:02:50.366 real 0m3.793s 00:02:50.366 user 0m1.147s 00:02:50.366 sys 0m1.769s 00:02:50.366 23:06:05 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:50.366 23:06:05 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:50.366 ************************************ 00:02:50.366 END TEST denied 00:02:50.366 ************************************ 00:02:50.366 23:06:05 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:02:50.366 23:06:05 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:50.366 23:06:05 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:50.366 23:06:05 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:50.367 23:06:05 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:50.367 ************************************ 00:02:50.367 START TEST allowed 00:02:50.367 ************************************ 00:02:50.367 23:06:05 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:02:50.367 23:06:05 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:82:00.0 00:02:50.367 23:06:05 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:50.367 23:06:05 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:82:00.0 .*: nvme -> .*' 00:02:50.367 23:06:05 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:50.367 23:06:05 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:52.896 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:02:52.896 23:06:07 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:02:52.896 23:06:07 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:02:52.896 23:06:07 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:02:52.896 23:06:07 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:52.896 23:06:07 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:54.271 00:02:54.271 real 0m3.749s 00:02:54.271 user 0m0.972s 00:02:54.271 sys 0m1.649s 00:02:54.271 23:06:09 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:54.271 23:06:09 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:02:54.271 ************************************ 00:02:54.271 END TEST allowed 00:02:54.271 ************************************ 00:02:54.271 23:06:09 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:02:54.271 00:02:54.271 real 0m10.265s 00:02:54.271 user 0m3.168s 00:02:54.271 sys 0m5.179s 00:02:54.271 23:06:09 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:54.271 23:06:09 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:54.271 ************************************ 00:02:54.271 END TEST acl 00:02:54.271 ************************************ 00:02:54.271 23:06:09 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:02:54.271 23:06:09 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:54.271 23:06:09 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:54.271 23:06:09 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:54.271 23:06:09 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:54.271 ************************************ 00:02:54.271 START TEST hugepages 00:02:54.271 ************************************ 00:02:54.271 23:06:09 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:54.271 * Looking for test storage... 00:02:54.271 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 27120104 kB' 'MemAvailable: 30701760 kB' 'Buffers: 2704 kB' 'Cached: 10224432 kB' 'SwapCached: 0 kB' 'Active: 7241652 kB' 'Inactive: 3506296 kB' 'Active(anon): 6845944 kB' 'Inactive(anon): 0 kB' 'Active(file): 395708 kB' 'Inactive(file): 3506296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523992 kB' 'Mapped: 181276 kB' 'Shmem: 6325132 kB' 'KReclaimable: 182952 kB' 'Slab: 537116 kB' 'SReclaimable: 182952 kB' 'SUnreclaim: 354164 kB' 'KernelStack: 12496 kB' 'PageTables: 8200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28304788 kB' 'Committed_AS: 7980280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196084 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1713756 kB' 'DirectMap2M: 17080320 kB' 'DirectMap1G: 33554432 kB' 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:54.271 23:06:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:54.272 23:06:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:54.272 23:06:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:54.272 23:06:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:54.272 23:06:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:54.272 23:06:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:54.272 23:06:09 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': '
[xtrace elided: setup/common.sh@31-32 repeats "read -r var val _" / "continue" for every /proc/meminfo key in the snapshot above while scanning toward Hugepagesize]
00:02:54.273 23:06:09 setup.sh.hugepages --
setup/common.sh@31 -- # read -r var val _ 00:02:54.273 23:06:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:54.273 23:06:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:54.273 23:06:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:54.273 23:06:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:54.273 23:06:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:54.273 23:06:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:54.273 23:06:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:54.273 23:06:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:54.273 23:06:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:54.273 23:06:09 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:02:54.273 23:06:09 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:02:54.273 23:06:09 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:54.273 23:06:09 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:54.273 23:06:09 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:54.273 23:06:09 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:02:54.273 23:06:09 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:02:54.273 23:06:09 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:02:54.273 23:06:09 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:02:54.273 23:06:09 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:02:54.273 23:06:09 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:02:54.273 23:06:09 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:54.273 23:06:09 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:02:54.273 23:06:09 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:54.273 23:06:09 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:54.273 23:06:09 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:54.273 23:06:09 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:54.273 23:06:09 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:02:54.273 23:06:09 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:02:54.273 23:06:09 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:54.273 23:06:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:54.273 23:06:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:54.273 23:06:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:54.273 23:06:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:54.273 23:06:09 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:54.273 23:06:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:54.273 23:06:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:54.273 
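The per-key read/continue trace elided above is setup/common.sh's get_meminfo walking the /proc/meminfo snapshot until the requested key matches. A minimal bash sketch of the same idea (not the SPDK script itself; the helper name is illustrative):

  # Sketch: print the value of one /proc/meminfo (or per-node meminfo) key.
  get_meminfo_value() {
      local want=$1 file=${2:-/proc/meminfo} var val _
      while IFS=': ' read -r var val _; do
          # skip every key that is not the one asked for
          [[ $var == "$want" ]] || continue
          echo "$val"        # e.g. Hugepagesize -> 2048
          return 0
      done < "$file"
      return 1               # key not present
  }
  # get_meminfo_value Hugepagesize   -> 2048 on this node, as in the trace above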
23:06:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:54.273 23:06:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:54.273 23:06:09 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:54.273 23:06:09 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:54.273 23:06:09 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:02:54.273 23:06:09 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:54.273 23:06:09 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:54.273 23:06:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:54.273 ************************************ 00:02:54.273 START TEST default_setup 00:02:54.273 ************************************ 00:02:54.273 23:06:09 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:02:54.273 23:06:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:02:54.273 23:06:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:02:54.273 23:06:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:54.273 23:06:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:02:54.273 23:06:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:54.273 23:06:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:02:54.273 23:06:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:54.273 23:06:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:54.273 23:06:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:54.273 23:06:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:54.273 23:06:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:02:54.273 23:06:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:54.273 23:06:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:54.273 23:06:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:54.273 23:06:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:54.273 23:06:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:54.273 23:06:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:54.273 23:06:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:54.273 23:06:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:02:54.273 23:06:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:02:54.273 23:06:09 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:02:54.273 23:06:09 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:55.647 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:02:55.647 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:02:55.647 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:02:55.647 
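The get_test_nr_hugepages trace above reduces to simple arithmetic: the requested 2097152 kB of hugepage memory divided by the 2048 kB page size yields the 1024 pages assigned to node 0 (nodes_test[0]=1024). A sketch with illustrative variable names:

  # Sketch: derive the per-node hugepage count seen in the trace.
  size_kb=2097152                              # requested test size in kB
  hugepage_kb=2048                             # Hugepagesize from /proc/meminfo
  nr_hugepages=$(( size_kb / hugepage_kb ))    # = 1024
  echo "node0 gets $nr_hugepages hugepages"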
0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:02:55.647 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:02:55.647 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:02:55.647 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:02:55.647 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:02:55.647 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:02:55.647 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:02:55.647 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:02:55.647 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:02:55.647 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:02:55.647 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:02:55.647 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:02:55.647 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:02:56.588 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:02:56.588 23:06:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:02:56.588 23:06:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:02:56.588 23:06:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:02:56.588 23:06:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:02:56.588 23:06:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:02:56.588 23:06:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:02:56.588 23:06:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:02:56.588 23:06:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:56.588 23:06:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:56.588 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:56.588 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:56.588 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:56.588 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:56.588 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.588 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:56.588 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:56.588 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.588 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.588 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.588 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.588 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29216712 kB' 'MemAvailable: 32798352 kB' 'Buffers: 2704 kB' 'Cached: 10224532 kB' 'SwapCached: 0 kB' 'Active: 7260888 kB' 'Inactive: 3506296 kB' 'Active(anon): 6865180 kB' 'Inactive(anon): 0 kB' 'Active(file): 395708 kB' 'Inactive(file): 3506296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543236 kB' 'Mapped: 181256 kB' 'Shmem: 6325232 kB' 'KReclaimable: 182920 kB' 'Slab: 536744 kB' 'SReclaimable: 182920 kB' 'SUnreclaim: 353824 kB' 
'KernelStack: 12464 kB' 'PageTables: 7976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 8001244 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196180 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1713756 kB' 'DirectMap2M: 17080320 kB' 'DirectMap1G: 33554432 kB' 00:02:56.588 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.588 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.588 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.588 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.588 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.588 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.588 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.588 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.588 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.588 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.588 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.588 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.588 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.588 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.588 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.588 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.588 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.588 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.588 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.588 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.588 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.588 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.588 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.588 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.588 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.588 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.588 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.588 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.588 
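Earlier in the setup.sh output above, the ioatdma channels and the NVMe device at 0000:82:00.0 were rebound to vfio-pci so they can be driven from userspace. A generic way to confirm which driver a PCI function ended up bound to (not part of the SPDK scripts; the BDF is taken from the log):

  # Sketch: show the driver currently bound to a PCI device via sysfs.
  bdf=0000:82:00.0
  if [[ -e /sys/bus/pci/devices/$bdf/driver ]]; then
      basename "$(readlink -f /sys/bus/pci/devices/$bdf/driver)"   # expected: vfio-pci
  else
      echo "$bdf: no driver bound"
  fi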
23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
[xtrace elided: setup/common.sh@31-32 repeats "read -r var val _" / "continue" for each /proc/meminfo key in the snapshot above while scanning toward AnonHugePages]
00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:56.589 23:06:11 
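The anon=0 recorded above comes from the AnonHugePages field of the snapshot; the earlier setup/hugepages.sh@96 check ('always [madvise] never') only confirmed that transparent hugepages are not globally disabled. The policy string that test matches against can be inspected directly (generic command, not from the SPDK scripts):

  # Sketch: the THP policy string matched by the @96 test above.
  cat /sys/kernel/mm/transparent_hugepage/enabled    # e.g. "always [madvise] never"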
setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29215384 kB' 'MemAvailable: 32797024 kB' 'Buffers: 2704 kB' 'Cached: 10224532 kB' 'SwapCached: 0 kB' 'Active: 7260200 kB' 'Inactive: 3506296 kB' 'Active(anon): 6864492 kB' 'Inactive(anon): 0 kB' 'Active(file): 395708 kB' 'Inactive(file): 3506296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542556 kB' 'Mapped: 181380 kB' 'Shmem: 6325232 kB' 'KReclaimable: 182920 kB' 'Slab: 536912 kB' 'SReclaimable: 182920 kB' 'SUnreclaim: 353992 kB' 'KernelStack: 12464 kB' 'PageTables: 8144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 8001264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196180 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1713756 kB' 'DirectMap2M: 17080320 kB' 'DirectMap1G: 33554432 kB' 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.589 23:06:11 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _
[xtrace elided: setup/common.sh@31-32 repeats "read -r var val _" / "continue" for each /proc/meminfo key in the snapshot above while scanning toward HugePages_Surp]
00:02:56.591 23:06:11
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29214756 kB' 'MemAvailable: 32796396 kB' 'Buffers: 2704 kB' 'Cached: 10224536 kB' 'SwapCached: 0 kB' 'Active: 7259640 kB' 'Inactive: 3506296 kB' 'Active(anon): 6863932 kB' 'Inactive(anon): 0 kB' 'Active(file): 395708 kB' 'Inactive(file): 3506296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542004 kB' 'Mapped: 181356 kB' 'Shmem: 6325236 kB' 'KReclaimable: 182920 kB' 'Slab: 536920 kB' 'SReclaimable: 182920 kB' 'SUnreclaim: 354000 kB' 'KernelStack: 12400 kB' 'PageTables: 8092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 8001284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196180 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1713756 kB' 'DirectMap2M: 17080320 kB' 'DirectMap1G: 33554432 kB' 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': ' 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.591 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
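The long single-quoted 'MemTotal: ...' entry just above is the /proc/meminfo snapshot captured for this lookup. Its hugepage fields are internally consistent: HugePages_Total 1024 at Hugepagesize 2048 kB accounts for the Hugetlb figure, and HugePages_Free 1024 shows none of the pool is in use yet.

    # Sanity arithmetic on the snapshot above (values taken from this log):
    echo $(( 1024 * 2048 ))   # 2097152 kB of hugepage memory, matching Hugetlb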
00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.592 
23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.592 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:56.593 nr_hugepages=1024 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:56.593 resv_hugepages=0 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:56.593 surplus_hugepages=0 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:56.593 anon_hugepages=0 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29216220 
kB' 'MemAvailable: 32797860 kB' 'Buffers: 2704 kB' 'Cached: 10224576 kB' 'SwapCached: 0 kB' 'Active: 7259280 kB' 'Inactive: 3506296 kB' 'Active(anon): 6863572 kB' 'Inactive(anon): 0 kB' 'Active(file): 395708 kB' 'Inactive(file): 3506296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541580 kB' 'Mapped: 181356 kB' 'Shmem: 6325276 kB' 'KReclaimable: 182920 kB' 'Slab: 536920 kB' 'SReclaimable: 182920 kB' 'SUnreclaim: 354000 kB' 'KernelStack: 12384 kB' 'PageTables: 7996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 8001308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196180 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1713756 kB' 'DirectMap2M: 17080320 kB' 'DirectMap1G: 33554432 kB' 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.593 23:06:11 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.593 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
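At this point hugepages.sh has already read surp=0 and resv=0, echoed nr_hugepages=1024 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0, and is re-reading HugePages_Total to confirm that the pool it requested is the pool the kernel reports. A hypothetical, self-contained restatement of that check (variable names are illustrative; the values come from this run):

    nr_hugepages=1024; surp=0; resv=0                  # values reported in this run
    total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
    (( total == nr_hugepages + surp + resv )) \
        && echo "hugepage accounting consistent" \
        || echo "hugepage accounting mismatch" >&2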
00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.594 23:06:11 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.594 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
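In the trace that follows, the lookup hits HugePages_Total (1024), the (( 1024 == nr_hugepages + surp + resv )) check is evaluated, and get_nodes then walks /sys/devices/system/node/node* to record how the pool is spread across NUMA nodes: node0 holds all 1024 pages and node1 none, so the per-node HugePages_Surp query that closes this excerpt reads /sys/devices/system/node/node0/meminfo. One common way to obtain those per-node counts (the exact mechanism hugepages.sh uses may differ; this is an illustrative sketch for the 2048 kB page size in use here):

    for node_dir in /sys/devices/system/node/node[0-9]*; do
        pages=$(cat "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
        echo "${node_dir##*/}: $pages hugepages"   # e.g. node0: 1024, node1: 0
    done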
00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 20076200 kB' 'MemUsed: 4496156 kB' 'SwapCached: 0 kB' 'Active: 1512552 kB' 'Inactive: 74428 kB' 'Active(anon): 1378396 kB' 'Inactive(anon): 0 kB' 'Active(file): 134156 kB' 'Inactive(file): 74428 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1241368 kB' 'Mapped: 76020 kB' 'AnonPages: 348756 kB' 'Shmem: 1032784 kB' 'KernelStack: 7352 kB' 'PageTables: 4624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 49900 kB' 'Slab: 219876 kB' 'SReclaimable: 49900 kB' 'SUnreclaim: 169976 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.595 23:06:11 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.595 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:56.596 node0=1024 expecting 1024 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:56.596 00:02:56.596 real 0m2.425s 00:02:56.596 user 0m0.672s 00:02:56.596 sys 0m0.874s 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:56.596 23:06:11 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:02:56.596 ************************************ 00:02:56.596 END TEST default_setup 00:02:56.596 ************************************ 00:02:56.596 23:06:11 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:56.596 23:06:11 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:02:56.596 23:06:11 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:56.596 23:06:11 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:56.596 23:06:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:56.854 ************************************ 00:02:56.854 START TEST per_node_1G_alloc 00:02:56.854 ************************************ 00:02:56.854 23:06:11 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:02:56.854 23:06:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:02:56.854 23:06:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:02:56.854 23:06:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:02:56.854 23:06:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:02:56.854 23:06:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:02:56.854 23:06:11 
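The default_setup check above finishes with the meminfo helper echoing 0 for HugePages_Surp and returning 0. Reduced to its essentials, that helper scans /proc/meminfo (or a node's meminfo file) field by field and prints the value of the requested key. Below is a simplified, self-contained sketch of that scan based only on the behaviour visible in the xtrace output; the function name and argument handling are illustrative, not a copy of setup/common.sh.

  #!/usr/bin/env bash
  # get_meminfo_value KEY [NODE]
  # Print the numeric value of KEY from /proc/meminfo, or from the per-node
  # meminfo file when a NUMA node number is given (those lines carry a
  # "Node N " prefix that has to be stripped first).
  get_meminfo_value() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local line var val _
      while IFS= read -r line; do
          line=${line#"Node $node "}           # no-op for plain /proc/meminfo lines
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$get" ]]; then
              echo "$val"                      # trailing "kB" unit, if any, lands in $_
              return 0
          fi
      done < "$mem_f"
      return 1
  }

  # On this runner the trace above corresponds to:
  #   get_meminfo_value HugePages_Surp     -> 0
  #   get_meminfo_value HugePages_Total 0  -> 1024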
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:02:56.854 23:06:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:02:56.854 23:06:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:56.854 23:06:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:56.854 23:06:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:02:56.854 23:06:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:02:56.854 23:06:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:56.854 23:06:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:56.854 23:06:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:56.854 23:06:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:56.854 23:06:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:56.854 23:06:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:02:56.854 23:06:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:56.854 23:06:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:56.854 23:06:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:56.854 23:06:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:56.854 23:06:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:02:56.854 23:06:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:02:56.854 23:06:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:02:56.854 23:06:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:02:56.854 23:06:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:56.854 23:06:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:57.803 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:57.803 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:57.803 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:57.803 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:57.803 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:57.803 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:57.803 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:57.803 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:57.803 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:57.803 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:57.803 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:57.803 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:57.803 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:57.803 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:57.803 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:57.803 
0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:57.803 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:58.103 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:02:58.103 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:02:58.103 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:02:58.103 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:58.103 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:58.103 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:58.103 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:58.103 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:58.103 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:58.103 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:58.103 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:58.103 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:58.103 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:58.103 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:58.103 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:58.103 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:58.103 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:58.103 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:58.103 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29204716 kB' 'MemAvailable: 32786356 kB' 'Buffers: 2704 kB' 'Cached: 10224644 kB' 'SwapCached: 0 kB' 'Active: 7259964 kB' 'Inactive: 3506296 kB' 'Active(anon): 6864256 kB' 'Inactive(anon): 0 kB' 'Active(file): 395708 kB' 'Inactive(file): 3506296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542180 kB' 'Mapped: 181472 kB' 'Shmem: 6325344 kB' 'KReclaimable: 182920 kB' 'Slab: 537172 kB' 'SReclaimable: 182920 kB' 'SUnreclaim: 354252 kB' 'KernelStack: 12400 kB' 'PageTables: 8068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 8001468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196212 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1713756 kB' 'DirectMap2M: 17080320 kB' 'DirectMap1G: 33554432 kB' 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.104 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.105 23:06:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # 
get_meminfo HugePages_Surp 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.105 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29204996 kB' 'MemAvailable: 32786636 kB' 'Buffers: 2704 kB' 'Cached: 10224648 kB' 'SwapCached: 0 kB' 'Active: 7259556 kB' 'Inactive: 3506296 kB' 'Active(anon): 6863848 kB' 'Inactive(anon): 0 kB' 'Active(file): 395708 kB' 'Inactive(file): 3506296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541708 kB' 'Mapped: 181372 kB' 'Shmem: 6325348 kB' 'KReclaimable: 182920 kB' 'Slab: 537156 kB' 'SReclaimable: 182920 kB' 'SUnreclaim: 354236 kB' 'KernelStack: 12400 kB' 'PageTables: 8044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 8001488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196180 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1713756 kB' 'DirectMap2M: 17080320 kB' 'DirectMap1G: 33554432 kB' 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.106 23:06:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.106 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.107 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@99 -- # surp=0 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29204996 kB' 'MemAvailable: 32786636 kB' 'Buffers: 2704 kB' 'Cached: 10224664 kB' 'SwapCached: 0 kB' 'Active: 7259588 kB' 'Inactive: 3506296 kB' 'Active(anon): 6863880 kB' 'Inactive(anon): 0 kB' 'Active(file): 395708 kB' 'Inactive(file): 3506296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541712 kB' 'Mapped: 181372 kB' 'Shmem: 6325364 kB' 'KReclaimable: 182920 kB' 'Slab: 537156 kB' 'SReclaimable: 182920 kB' 'SUnreclaim: 354236 kB' 'KernelStack: 12416 kB' 'PageTables: 8044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 8001512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196180 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1713756 kB' 'DirectMap2M: 17080320 kB' 'DirectMap1G: 33554432 kB' 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.108 
23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.108 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.109 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.112 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.112 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.112 23:06:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.112 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.112 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.112 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.112 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.112 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.112 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.112 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.112 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.112 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.112 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.112 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.112 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.112 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.112 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.112 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.112 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.112 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.112 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.112 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.112 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.112 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.112 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.112 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.112 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.112 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.112 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.112 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.112 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.112 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.112 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.112 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.112 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.112 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:58.112 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.112 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.112 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.112 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.112 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.112 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.112 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.112 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.112 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.112 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.113 23:06:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:58.113 nr_hugepages=1024 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:58.113 
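At this point the script has read surp=0 and resv=0 and prints the pool summary (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0); the arithmetic tests that follow assert that the kernel's HugePages_Total equals the requested pool plus surplus and reserved pages. A sketch of that identity check, reusing the illustrative helper above and the values visible in this trace:

    nr_hugepages=1024                              # requested pool size, per the trace
    surp=$(get_meminfo_sketch HugePages_Surp)      # 0 in the trace
    resv=$(get_meminfo_sketch HugePages_Rsvd)      # 0 in the trace
    total=$(get_meminfo_sketch HugePages_Total)    # 1024 in the trace
    (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"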
resv_hugepages=0 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:58.113 surplus_hugepages=0 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:58.113 anon_hugepages=0 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29204244 kB' 'MemAvailable: 32785884 kB' 'Buffers: 2704 kB' 'Cached: 10224688 kB' 'SwapCached: 0 kB' 'Active: 7259612 kB' 'Inactive: 3506296 kB' 'Active(anon): 6863904 kB' 'Inactive(anon): 0 kB' 'Active(file): 395708 kB' 'Inactive(file): 3506296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541720 kB' 'Mapped: 181372 kB' 'Shmem: 6325388 kB' 'KReclaimable: 182920 kB' 'Slab: 537156 kB' 'SReclaimable: 182920 kB' 'SUnreclaim: 354236 kB' 'KernelStack: 12416 kB' 'PageTables: 8044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 8001532 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196180 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1713756 kB' 'DirectMap2M: 17080320 kB' 'DirectMap1G: 33554432 kB' 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.113 
23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.113 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.114 23:06:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.114 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:58.115 23:06:13 
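The get_nodes helper traced above enumerates /sys/devices/system/node/node<N>, finds two NUMA nodes, and records a per-node target of 512 pages each, so the 1024-page pool is expected to split evenly across nodes. A short sketch of that enumeration (the 512-page target is this test's configuration, not something derived here):

    declare -a nodes_sys
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        [[ -d $node_dir ]] || continue
        node=${node_dir##*node}        # numeric suffix: 0, 1, ...
        nodes_sys[node]=512            # per-node target used by this test
    done
    echo "nodes found: ${#nodes_sys[@]}"   # 2 on the machine in this log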
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 21114640 kB' 'MemUsed: 3457716 kB' 'SwapCached: 0 kB' 'Active: 1513424 kB' 'Inactive: 74428 kB' 'Active(anon): 1379268 kB' 'Inactive(anon): 0 kB' 'Active(file): 134156 kB' 'Inactive(file): 74428 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1241480 kB' 'Mapped: 76036 kB' 'AnonPages: 349496 kB' 'Shmem: 1032896 kB' 'KernelStack: 7416 kB' 'PageTables: 4716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 49900 kB' 'Slab: 220036 kB' 'SReclaimable: 49900 kB' 'SUnreclaim: 170136 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.115 
23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.115 23:06:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.115 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19454316 kB' 'MemFree: 8089604 kB' 'MemUsed: 11364712 kB' 'SwapCached: 0 kB' 'Active: 5746184 kB' 'Inactive: 3431868 kB' 'Active(anon): 5484632 kB' 'Inactive(anon): 0 kB' 'Active(file): 261552 kB' 'Inactive(file): 3431868 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8985936 kB' 'Mapped: 105336 kB' 'AnonPages: 192184 kB' 'Shmem: 5292516 kB' 'KernelStack: 4984 kB' 'PageTables: 3280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133020 kB' 'Slab: 317120 kB' 'SReclaimable: 133020 kB' 'SUnreclaim: 184100 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
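The get_meminfo call traced here switches mem_f from /proc/meminfo to /sys/devices/system/node/node1/meminfo, strips the leading "Node <N> " prefix from every line, and then walks each "key: value" pair with IFS=': ' until it reaches the requested HugePages_Surp field, reporting 0 when the value is zero or the field is absent. Below is a minimal standalone sketch of that same lookup pattern; the function name node_meminfo_field is illustrative and not part of the SPDK scripts, and error handling is kept to the bare minimum.

# Sketch only: look up one field in (per-node) meminfo, mirroring the loop seen in the trace.
node_meminfo_field() {
  local field=$1 node=$2 mem_f=/proc/meminfo line var val
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  while IFS= read -r line; do
    line=${line#Node "$node" }          # per-node files prefix each line with "Node <N> "
    IFS=': ' read -r var val _ <<< "$line"
    if [[ $var == "$field" ]]; then
      echo "${val:-0}"
      return 0
    fi
  done < "$mem_f"
  echo 0                                # field absent: report 0, as the trace does
}
# e.g. node_meminfo_field HugePages_Surp 1   -> 0 on this runner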
00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.116 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.117 23:06:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.117 23:06:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:58.117 node0=512 expecting 512 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:58.117 node1=512 expecting 512 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:58.117 00:02:58.117 real 0m1.422s 00:02:58.117 user 0m0.611s 00:02:58.117 sys 0m0.783s 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:58.117 23:06:13 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:58.117 ************************************ 00:02:58.117 END TEST per_node_1G_alloc 00:02:58.117 ************************************ 00:02:58.117 23:06:13 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:58.117 23:06:13 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:02:58.117 23:06:13 setup.sh.hugepages -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:58.117 23:06:13 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:58.117 23:06:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:58.117 ************************************ 00:02:58.117 START TEST even_2G_alloc 00:02:58.117 ************************************ 00:02:58.117 23:06:13 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:02:58.117 23:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:02:58.117 23:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:58.117 23:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:58.117 23:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:58.117 23:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:58.117 23:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:58.117 23:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:58.117 23:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:58.117 23:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:58.117 23:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:58.117 23:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:58.117 23:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:58.117 23:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:58.117 23:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:58.117 23:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:58.117 23:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:58.117 23:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:02:58.117 23:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:58.117 23:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:58.117 23:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:58.117 23:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:58.117 23:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:58.117 23:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:58.117 23:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:02:58.117 23:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:02:58.117 23:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:02:58.117 23:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:58.117 23:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:59.494 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:59.494 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 
00:02:59.494 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:59.494 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:59.494 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:59.494 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:59.494 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:59.494 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:59.494 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:59.494 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:59.494 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:59.494 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:59.494 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:59.494 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:59.494 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:59.494 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:59.494 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29230836 kB' 'MemAvailable: 32812476 kB' 'Buffers: 2704 kB' 'Cached: 10224776 kB' 'SwapCached: 0 kB' 'Active: 7262320 kB' 'Inactive: 3506296 kB' 'Active(anon): 6866612 kB' 'Inactive(anon): 0 kB' 'Active(file): 395708 kB' 'Inactive(file): 3506296 kB' 'Unevictable: 3072 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544356 kB' 'Mapped: 181924 kB' 'Shmem: 6325476 kB' 'KReclaimable: 182920 kB' 'Slab: 537392 kB' 'SReclaimable: 182920 kB' 'SUnreclaim: 354472 kB' 'KernelStack: 12448 kB' 'PageTables: 8068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 8004428 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196260 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1713756 kB' 'DirectMap2M: 17080320 kB' 'DirectMap1G: 33554432 kB' 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.494 
23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.494 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
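Before trusting the hugepage counters, verify_nr_hugepages first checks the transparent-hugepage setting ("always [madvise] never" at hugepages.sh@96 earlier in this trace) and only reads AnonHugePages when THP is not pinned to [never]; the meminfo snapshot above reports 'AnonHugePages: 0 kB', so this scan resolves anon to 0. A rough sketch of that guard follows; the awk lookup stands in for get_meminfo and the variable names are illustrative, not the actual script.

# Sketch only: count anonymous THP usage only when THP is not disabled
# (same idea as hugepages.sh@96-97 in the trace; not the SPDK script itself).
thp_enabled=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)   # e.g. "always [madvise] never"
anon=0
if [[ $thp_enabled != *"[never]"* ]]; then
  anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)                   # value in kB
fi
echo "anon=${anon:-0}"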
00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@97 -- # anon=0
00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:59.495 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:59.496 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29227812 kB' 'MemAvailable: 32809452 kB' 'Buffers: 2704 kB' 'Cached: 10224780 kB' 'SwapCached: 0 kB' 'Active: 7264888 kB' 'Inactive: 3506296 kB' 'Active(anon): 6869180 kB' 'Inactive(anon): 0 kB' 'Active(file): 395708 kB' 'Inactive(file): 3506296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546984 kB' 'Mapped: 181924 kB' 'Shmem: 6325480 kB' 'KReclaimable: 182920 kB' 'Slab: 537392 kB' 'SReclaimable: 182920 kB' 'SUnreclaim: 354472 kB' 'KernelStack: 12448 kB' 'PageTables: 8064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 8006820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196244 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1713756 kB' 'DirectMap2M: 17080320 kB' 'DirectMap1G: 33554432 kB'
[ setup/common.sh@31-32 read loop: every field from MemTotal through HugePages_Rsvd fails the HugePages_Surp match and hits continue ]
00:02:59.497 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:59.497 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:02:59.497 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:59.497 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:02:59.497 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:59.497 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:59.497 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:02:59.497 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:02:59.497 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:59.497 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:59.497 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:59.497 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:59.497 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:59.497 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:59.497 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:59.497 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:59.497 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29223856 kB' 'MemAvailable: 32805496 kB' 'Buffers: 2704 kB' 'Cached: 10224796 kB' 'SwapCached: 0 kB' 'Active: 7265388 kB' 'Inactive: 3506296 kB' 'Active(anon): 6869680 kB' 'Inactive(anon): 0 kB' 'Active(file): 395708 kB' 'Inactive(file): 3506296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547436 kB' 'Mapped: 182304 kB' 'Shmem: 6325496 kB' 'KReclaimable: 182920 kB' 'Slab: 537368 kB' 'SReclaimable: 182920 kB' 'SUnreclaim: 354448 kB' 'KernelStack: 12448 kB' 'PageTables: 8072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 8007776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196248 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1713756 kB' 'DirectMap2M: 17080320 kB' 'DirectMap1G: 33554432 kB'
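The two printf dumps above are the raw material get_meminfo works from: it reads /proc/meminfo (the trace checks /sys/devices/system/node/node/meminfo with an empty node value and stays on /proc/meminfo), strips any leading "Node <N>" prefix, and walks the fields until the requested key matches. A minimal stand-alone sketch of that lookup follows; the helper name meminfo_field and the sed/awk parse are illustrative assumptions, not the actual setup/common.sh implementation.

#!/usr/bin/env bash
# Sketch only (assumed helper, not SPDK's setup/common.sh): fetch one field
# from /proc/meminfo, or from a per-NUMA-node meminfo file when a node is given.
meminfo_field() {
    local key=$1 node=${2:-}
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    # Per-node files prefix every line with "Node <N> "; strip it, then match the key.
    sed -E 's/^Node [0-9]+ //' "$mem_f" | awk -v key="$key" -F': +' '$1 == key {print $2 + 0; exit}'
}

meminfo_field HugePages_Surp     # prints 0 for the snapshot above
meminfo_field HugePages_Total 0  # per-node variant, if node0 exposes a meminfo file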
[ setup/common.sh@31-32 read loop: every field from MemTotal through HugePages_Free fails the HugePages_Rsvd match and hits continue ]
00:02:59.761 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:59.761 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:02:59.761 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:59.761 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:59.761 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:59.761 nr_hugepages=1024
00:02:59.761 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:59.761 resv_hugepages=0
00:02:59.761 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:59.761 surplus_hugepages=0
00:02:59.761 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:59.761 anon_hugepages=0
00:02:59.761 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:59.761 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:02:59.761 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:59.761 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:59.761 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:02:59.761 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:02:59.761 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:59.761 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:59.761 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:59.761 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:59.761 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:59.761 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:59.761 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:59.761 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:59.761 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29227476 kB' 'MemAvailable: 32809116 kB' 'Buffers: 2704 kB' 'Cached: 10224820 kB' 'SwapCached: 0 kB' 'Active: 7261808 kB' 'Inactive: 3506296 kB' 'Active(anon): 6866100 kB' 'Inactive(anon): 0 kB' 'Active(file): 395708 kB' 'Inactive(file): 3506296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543868 kB' 'Mapped: 181824 kB' 'Shmem: 6325520 kB' 'KReclaimable: 182920 kB' 'Slab: 537368 kB' 'SReclaimable: 182920 kB' 'SUnreclaim: 354448 kB' 'KernelStack: 12432 kB' 'PageTables: 8000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 8004620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196228 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1713756 kB' 'DirectMap2M: 17080320 kB' 'DirectMap1G: 33554432 kB'
[ setup/common.sh@31-32 read loop: every field from MemTotal through CmaFree fails the HugePages_Total match and hits continue ]
00:02:59.762 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:59.762 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:02:59.762
23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.762 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.762 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.762 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:02:59.762 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:59.762 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:59.762 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:59.762 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:02:59.762 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:59.762 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:59.762 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:59.762 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:59.762 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:59.762 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:59.762 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:59.762 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:59.762 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:59.762 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:59.762 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:02:59.762 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:59.762 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:59.762 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.762 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:59.762 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 21130112 kB' 'MemUsed: 3442244 kB' 'SwapCached: 0 kB' 'Active: 1514144 kB' 'Inactive: 74428 kB' 'Active(anon): 1379988 kB' 'Inactive(anon): 0 kB' 'Active(file): 134156 kB' 'Inactive(file): 74428 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1241588 kB' 'Mapped: 76048 kB' 'AnonPages: 350124 kB' 'Shmem: 1033004 kB' 'KernelStack: 7464 kB' 'PageTables: 4784 
kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 49900 kB' 'Slab: 219996 kB' 'SReclaimable: 49900 kB' 'SUnreclaim: 170096 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.763 
23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.763 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19454316 kB' 'MemFree: 8094776 kB' 'MemUsed: 11359540 kB' 'SwapCached: 0 kB' 'Active: 5746280 kB' 'Inactive: 3431868 kB' 'Active(anon): 5484728 kB' 'Inactive(anon): 0 kB' 'Active(file): 261552 kB' 'Inactive(file): 3431868 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8985936 kB' 'Mapped: 105672 kB' 'AnonPages: 192296 kB' 'Shmem: 5292516 kB' 'KernelStack: 4984 kB' 'PageTables: 3348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 
'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133020 kB' 'Slab: 317372 kB' 'SReclaimable: 133020 kB' 'SUnreclaim: 184352 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.764 23:06:14 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.764 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.765 23:06:14 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
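The long runs of "continue" lines above are the xtrace of setup/common.sh's get_meminfo helper scanning a per-node meminfo file one "key: value" pair at a time until it reaches the requested key (here HugePages_Surp for node 0 and node 1). Below is a minimal stand-alone sketch of that helper, reconstructed from the trace; the names (get_meminfo, mem_f, mem) follow the trace, but the argument handling is a simplified reading, not the script itself.

```bash
#!/usr/bin/env bash
# Sketch of the get_meminfo helper as it appears in the xtrace above.
shopt -s extglob   # needed for the +([0-9]) pattern that strips "Node N "

get_meminfo() {
    local get=$1 node=$2
    local var val _ mem
    local mem_f=/proc/meminfo

    # Per-node queries read that NUMA node's own meminfo instead of the global file.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <n> "; drop that prefix.
    mem=("${mem[@]#Node +([0-9]) }")

    # Scan "key: value" pairs: every non-matching key is one of the "continue"
    # lines in the trace, the matching key echoes its numeric value.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# Example mirroring the trace: surplus hugepages reported by NUMA node 0.
get_meminfo HugePages_Surp 0
```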
00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:59.765 node0=512 expecting 512 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:59.765 node1=512 expecting 512 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:59.765 00:02:59.765 real 0m1.514s 00:02:59.765 user 0m0.627s 00:02:59.765 sys 0m0.857s 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:59.765 23:06:14 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:59.765 ************************************ 00:02:59.765 END TEST even_2G_alloc 00:02:59.765 ************************************ 00:02:59.765 23:06:14 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:59.765 23:06:14 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:02:59.765 23:06:14 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:59.765 23:06:14 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:59.765 23:06:14 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:59.765 ************************************ 00:02:59.765 START TEST odd_alloc 
00:02:59.765 ************************************ 00:02:59.765 23:06:14 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:02:59.765 23:06:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:02:59.765 23:06:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:02:59.765 23:06:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:59.765 23:06:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:59.765 23:06:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:02:59.765 23:06:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:59.765 23:06:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:59.766 23:06:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:59.766 23:06:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:02:59.766 23:06:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:59.766 23:06:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:59.766 23:06:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:59.766 23:06:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:59.766 23:06:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:59.766 23:06:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:59.766 23:06:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:59.766 23:06:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:02:59.766 23:06:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:59.766 23:06:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:59.766 23:06:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:02:59.766 23:06:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:59.766 23:06:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:59.766 23:06:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:59.766 23:06:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:02:59.766 23:06:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:02:59.766 23:06:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:02:59.766 23:06:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:59.766 23:06:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:01.145 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:01.145 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:01.145 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:01.145 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:01.145 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:01.145 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:01.145 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:01.145 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 
00:03:01.145 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:01.145 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:01.145 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:01.145 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:01.145 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:01.145 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:01.145 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:01.145 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:01.145 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29235924 kB' 'MemAvailable: 32817564 kB' 'Buffers: 2704 kB' 'Cached: 10224916 kB' 'SwapCached: 0 kB' 'Active: 7259032 kB' 'Inactive: 3506296 kB' 'Active(anon): 6863324 kB' 'Inactive(anon): 0 kB' 'Active(file): 395708 kB' 'Inactive(file): 3506296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540892 kB' 'Mapped: 180588 kB' 'Shmem: 6325616 kB' 'KReclaimable: 182920 kB' 'Slab: 537172 kB' 'SReclaimable: 182920 kB' 'SUnreclaim: 354252 kB' 'KernelStack: 12848 kB' 'PageTables: 9720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352340 kB' 'Committed_AS: 7990920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196484 kB' 
'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1713756 kB' 'DirectMap2M: 17080320 kB' 'DirectMap1G: 33554432 kB' 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.145 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.146 23:06:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.146 
23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29238200 kB' 'MemAvailable: 32819840 kB' 'Buffers: 2704 kB' 'Cached: 10224916 kB' 'SwapCached: 0 kB' 'Active: 7258672 kB' 'Inactive: 3506296 kB' 'Active(anon): 6862964 kB' 'Inactive(anon): 0 kB' 'Active(file): 395708 kB' 'Inactive(file): 3506296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540448 kB' 'Mapped: 180548 kB' 'Shmem: 6325616 kB' 'KReclaimable: 182920 kB' 'Slab: 537176 kB' 'SReclaimable: 182920 kB' 'SUnreclaim: 354256 kB' 'KernelStack: 12688 kB' 'PageTables: 9084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352340 kB' 'Committed_AS: 7990936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196340 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1713756 kB' 'DirectMap2M: 17080320 kB' 'DirectMap1G: 33554432 kB' 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
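[editor note] The stretch above is the xtrace of setup/common.sh's get_meminfo helper scanning a /proc/meminfo snapshot field by field, here looking for HugePages_Surp. Below is a minimal bash sketch of that loop, reconstructed only from the traced commands (mapfile, the "Node N " prefix strip, and the IFS=': ' read); it is an illustration, not the verbatim upstream setup/common.sh implementation, and details such as error handling may differ.

#!/usr/bin/env bash
# Illustrative reconstruction of the traced get_meminfo loop.
shopt -s extglob   # needed for the +([0-9]) pattern used below

get_meminfo() {
    local get=$1 node=${2:-}
    local var val
    local mem_f=/proc/meminfo
    # With a node index, read the per-node file instead (the trace checks this
    # path with an empty node and falls back to /proc/meminfo).
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem <"$mem_f"
    # Per-node files prefix every line with "Node N "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    # Scan "Key: value [kB]" lines until the requested key is found.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"   # e.g. 0 for HugePages_Surp in this run
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Surp   # prints 0 on the system traced above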
00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.146 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.147 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.148 23:06:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29238908 kB' 'MemAvailable: 32820548 kB' 'Buffers: 2704 kB' 'Cached: 10224932 kB' 'SwapCached: 0 kB' 'Active: 7257608 kB' 'Inactive: 3506296 kB' 'Active(anon): 6861900 kB' 'Inactive(anon): 0 kB' 'Active(file): 395708 kB' 'Inactive(file): 3506296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539384 kB' 'Mapped: 180548 kB' 'Shmem: 6325632 kB' 'KReclaimable: 182920 kB' 'Slab: 537152 kB' 'SReclaimable: 182920 kB' 'SUnreclaim: 354232 kB' 'KernelStack: 12512 kB' 'PageTables: 7976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352340 kB' 'Committed_AS: 7988596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196180 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1713756 kB' 'DirectMap2M: 17080320 kB' 'DirectMap1G: 33554432 kB' 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- 
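[editor note] The queries in this stretch (HugePages_Surp, then HugePages_Rsvd) run the same scan with node= left empty, so the global /proc/meminfo is read; the /sys/devices/system/node/node*/meminfo test in the trace is the per-node branch of the same helper. Hypothetical direct reads of the same counters outside the test harness:

# Global counters, as used by the calls traced here (node= is empty).
grep -E '^HugePages_(Total|Free|Rsvd|Surp):' /proc/meminfo
# Per-node view for NUMA node 0; these lines carry a "Node 0 " prefix,
# which is what the Node-prefix strip in the helper removes.
grep -E 'HugePages_(Total|Free|Surp)' /sys/devices/system/node/node0/meminfo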
setup/common.sh@32 -- # continue 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.148 23:06:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.148 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.149 
23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.149 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:01.150 nr_hugepages=1025 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:01.150 resv_hugepages=0 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:01.150 surplus_hugepages=0 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:01.150 anon_hugepages=0 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- 
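[editor note] At this point the trace has collected anon=0, surp=0 and resv=0, echoed nr_hugepages=1025 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0, and both arithmetic checks at hugepages.sh@107 and @109 have passed before a final HugePages_Total query is issued. A self-contained sketch of that accounting follows, with get_meminfo replaced by a minimal awk stand-in; the function name verify_odd_alloc and the literal 1025 are assumptions taken from this run, not the upstream source.

#!/usr/bin/env bash
# Minimal stand-in for setup/common.sh's get_meminfo, enough for this sketch.
get_meminfo() { awk -v k="$1:" '$1 == k {print $2; exit}' /proc/meminfo; }

verify_odd_alloc() {
    local nr_hugepages=1025        # odd page count requested by this test
    local anon surp resv total

    anon=$(get_meminfo AnonHugePages)     # 0 kB in this run
    surp=$(get_meminfo HugePages_Surp)    # 0
    resv=$(get_meminfo HugePages_Rsvd)    # 0
    total=$(get_meminfo HugePages_Total)  # 1025

    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"

    # Both checks pass in this run: all 1025 pages were allocated and none
    # of them are surplus or reserved.
    (( total == nr_hugepages + surp + resv ))
    (( total == nr_hugepages ))
}

verify_odd_alloc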
setup/common.sh@20 -- # local mem_f mem 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29240092 kB' 'MemAvailable: 32821732 kB' 'Buffers: 2704 kB' 'Cached: 10224952 kB' 'SwapCached: 0 kB' 'Active: 7257304 kB' 'Inactive: 3506296 kB' 'Active(anon): 6861596 kB' 'Inactive(anon): 0 kB' 'Active(file): 395708 kB' 'Inactive(file): 3506296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539084 kB' 'Mapped: 180548 kB' 'Shmem: 6325652 kB' 'KReclaimable: 182920 kB' 'Slab: 537152 kB' 'SReclaimable: 182920 kB' 'SUnreclaim: 354232 kB' 'KernelStack: 12368 kB' 'PageTables: 7576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352340 kB' 'Committed_AS: 7988616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196180 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1713756 kB' 'DirectMap2M: 17080320 kB' 'DirectMap1G: 33554432 kB' 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.150 23:06:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.150 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.151 23:06:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:01.151 23:06:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 21137676 kB' 'MemUsed: 3434680 kB' 'SwapCached: 0 kB' 'Active: 1510768 kB' 'Inactive: 74428 kB' 'Active(anon): 1376612 kB' 'Inactive(anon): 0 kB' 'Active(file): 134156 kB' 'Inactive(file): 74428 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1241732 kB' 'Mapped: 76060 kB' 'AnonPages: 346584 kB' 'Shmem: 1033148 kB' 'KernelStack: 7384 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 49900 kB' 'Slab: 219860 kB' 'SReclaimable: 49900 kB' 'SUnreclaim: 169960 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
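[editor's note] The trace above has just dumped /sys/devices/system/node/node0/meminfo and is about to scan it field by field for HugePages_Surp, the same way it scanned /proc/meminfo for HugePages_Total. The following is a condensed, standalone sketch of that lookup pattern, assembled from the commands visible in the trace; it is a hedged illustration, not the verbatim setup/common.sh source, and the helper name get_meminfo_sketch is made up for this note.

#!/usr/bin/env bash
shopt -s extglob
# Hedged sketch of the pattern traced above: read /proc/meminfo or a per-node
# meminfo file, strip the "Node N " prefix the per-node files carry, then scan
# "key: value" pairs until the requested field is found.
get_meminfo_sketch() {
    local get=$1 node=${2:-}            # field name, optional NUMA node id
    local mem_f=/proc/meminfo line var val _
    local -a mem
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # no-op for the system-wide file
    local IFS=': '
    for line in "${mem[@]}"; do
        read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                 # e.g. 1025 for HugePages_Total above
            return 0
        fi
    done
    return 1
}

get_meminfo_sketch HugePages_Total      # system-wide count
get_meminfo_sketch HugePages_Surp 0     # surplus pages on NUMA node 0

The field-by-field "[[ key == \H\u\g\e\P\a\g\e\s... ]] / continue" lines that follow are simply this loop running with xtrace enabled, one iteration per meminfo key.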
00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.152 23:06:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.152 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19454316 kB' 'MemFree: 8101916 kB' 'MemUsed: 11352400 kB' 'SwapCached: 0 kB' 'Active: 5746264 kB' 'Inactive: 3431868 kB' 'Active(anon): 5484712 kB' 'Inactive(anon): 0 kB' 'Active(file): 261552 kB' 'Inactive(file): 3431868 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8985948 kB' 'Mapped: 104488 kB' 'AnonPages: 192264 kB' 'Shmem: 5292528 kB' 'KernelStack: 5032 kB' 'PageTables: 3360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133020 kB' 'Slab: 317288 kB' 'SReclaimable: 133020 kB' 'SUnreclaim: 184268 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.153 23:06:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.153 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.154 23:06:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:03:01.154 node0=512 expecting 513
00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:03:01.154 node1=513 expecting 512
00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:03:01.154
00:03:01.154 real 0m1.427s
00:03:01.154 user 0m0.569s
00:03:01.154 sys 0m0.831s
00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:01.154 23:06:16 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:01.154 ************************************
00:03:01.154 END TEST odd_alloc
00:03:01.154 ************************************
00:03:01.154 23:06:16 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:01.154 23:06:16 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:01.154 23:06:16 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:01.154 23:06:16 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:01.154 23:06:16 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:01.154 ************************************
00:03:01.154 START TEST custom_alloc
00:03:01.154 ************************************
00:03:01.154 23:06:16 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc
00:03:01.154 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:03:01.154 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:03:01.154 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:01.154 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:01.154 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:01.154 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:01.154 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:01.154 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:01.154 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:01.154 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:01.154 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:01.154 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:01.154 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:01.155 23:06:16
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:01.154 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 
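[editor's note] At this point the custom_alloc trace has settled on 512 huge pages for node 0 and 1024 for node 1 (nodes_hp[0]=512, nodes_hp[1]=1024). The next traced step joins those targets into the comma-separated HUGENODE list and sums the grand total. Below is a minimal standalone sketch of that assembly step under the values seen in the trace; it is a hedged illustration, not the verbatim setup/hugepages.sh source, and the top-level variable names are only loosely matched to the script's.

#!/usr/bin/env bash
# Hedged sketch of the HUGENODE assembly the trace performs next: take the
# per-node 2 MiB hugepage targets and build the list the setup script consumes.
declare -a nodes_hp=([0]=512 [1]=1024)   # per-node targets from the trace
declare -a HUGENODE=()
nr_hugepages=0

for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
    (( nr_hugepages += nodes_hp[node] ))
done

# Join with commas, as the trace does via `local IFS=,` inside custom_alloc.
( IFS=,; printf 'HUGENODE=%s\n' "${HUGENODE[*]}" )   # nodes_hp[0]=512,nodes_hp[1]=1024
printf 'nr_hugepages=%d\n' "$nr_hugepages"           # 1536, the total the verify step below checks

The trace that follows shows exactly this: HUGENODE ends up as 'nodes_hp[0]=512,nodes_hp[1]=1024', nr_hugepages becomes 1536, and verify_nr_hugepages then re-reads the system-wide and per-node meminfo to confirm the kernel actually reserved that split.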
00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:01.155 23:06:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:02.528 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:02.528 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:02.528 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:02.528 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:02.528 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:02.528 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:02.528 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:02.528 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:02.528 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:02.528 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:02.528 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:02.528 0000:80:04.5 (8086 0e25): Already using the 
vfio-pci driver 00:03:02.528 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:02.528 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:02.528 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:02.528 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:02.528 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:02.528 23:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:02.528 23:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:02.528 23:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:02.528 23:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:02.528 23:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:02.528 23:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:02.528 23:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:02.528 23:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:02.528 23:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:02.528 23:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:02.528 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:02.528 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:02.528 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:02.528 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:02.528 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 28210700 kB' 'MemAvailable: 31792340 kB' 'Buffers: 2704 kB' 'Cached: 10225044 kB' 'SwapCached: 0 kB' 'Active: 7257696 kB' 'Inactive: 3506296 kB' 'Active(anon): 6861988 kB' 'Inactive(anon): 0 kB' 'Active(file): 395708 kB' 'Inactive(file): 3506296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539380 kB' 'Mapped: 180660 kB' 'Shmem: 6325744 kB' 'KReclaimable: 182920 kB' 'Slab: 536840 kB' 'SReclaimable: 182920 kB' 'SUnreclaim: 353920 kB' 'KernelStack: 12432 kB' 'PageTables: 7788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829076 kB' 'Committed_AS: 7988812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196244 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1713756 kB' 'DirectMap2M: 17080320 kB' 'DirectMap1G: 33554432 kB' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:02.529 23:06:17 
setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 28210260 kB' 'MemAvailable: 31791900 kB' 'Buffers: 2704 kB' 'Cached: 10225048 kB' 'SwapCached: 0 kB' 'Active: 7257096 kB' 'Inactive: 3506296 kB' 'Active(anon): 6861388 kB' 'Inactive(anon): 0 kB' 'Active(file): 395708 kB' 'Inactive(file): 3506296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538772 kB' 'Mapped: 180632 kB' 'Shmem: 6325748 kB' 'KReclaimable: 182920 kB' 'Slab: 536824 kB' 'SReclaimable: 182920 kB' 'SUnreclaim: 353904 kB' 'KernelStack: 12384 kB' 'PageTables: 7636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829076 kB' 'Committed_AS: 7988832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196196 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1713756 kB' 'DirectMap2M: 17080320 kB' 'DirectMap1G: 33554432 kB' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.529 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
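The repetitive "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue" lines above (and the earlier AnonHugePages pass) are the xtrace of setup/common.sh's get_meminfo scanning the captured /proc/meminfo dump one "key: value" pair at a time until the requested counter matches. A self-contained sketch of that lookup, assuming the same mapfile/IFS=': ' approach visible in the trace (the function name and shape mirror setup/common.sh, but the body is an approximation, not the SPDK helper itself):

#!/usr/bin/env bash
# Sketch of a get_meminfo-style lookup: read the system-wide or per-node
# meminfo, strip any "Node N " prefix, and return the value of one counter.
shopt -s extglob   # required for the +([0-9]) prefix strip below

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local -a mem
    local line var val _

    # per-node counters live under /sys/devices/system/node/nodeN/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop "Node N " prefixes, if present

    local IFS=': '
    for line in "${mem[@]}"; do
        read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done
    return 1
}

get_meminfo HugePages_Surp      # 0 in this run
get_meminfo HugePages_Free 0    # node 0 view, when that sysfs file exists

In this trace each pass ends with "echo 0" / "return 0", which is how anon=0 and surp=0 get recorded before the HugePages_Rsvd lookup that follows.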
00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 28210260 kB' 'MemAvailable: 31791900 kB' 'Buffers: 2704 kB' 'Cached: 10225064 kB' 'SwapCached: 0 kB' 'Active: 7256960 kB' 'Inactive: 3506296 kB' 'Active(anon): 6861252 kB' 'Inactive(anon): 0 kB' 'Active(file): 395708 kB' 'Inactive(file): 3506296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 
'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538588 kB' 'Mapped: 180556 kB' 'Shmem: 6325764 kB' 'KReclaimable: 182920 kB' 'Slab: 536848 kB' 'SReclaimable: 182920 kB' 'SUnreclaim: 353928 kB' 'KernelStack: 12368 kB' 'PageTables: 7584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829076 kB' 'Committed_AS: 7988852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196196 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1713756 kB' 'DirectMap2M: 17080320 kB' 'DirectMap1G: 33554432 kB' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.530 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.531 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.531 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.531 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.531 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.531 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.531 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.531 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.531 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.531 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:03:02.531 [setup/common.sh get_meminfo: each remaining /proc/meminfo field, Zswap through HugePages_Free, fails the HugePages_Rsvd comparison and is skipped with continue]
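The loop being traced here is setup/common.sh's get_meminfo helper: it slurps /proc/meminfo (or, when given a node number, /sys/devices/system/node/nodeN/meminfo with the leading "Node N " stripped), walks the fields with IFS=': ' read -r var val _, and echoes the value of the one field it was asked for. A minimal stand-alone sketch of the same idea, for illustration only (the function name and structure below are mine, not the script's):

#!/usr/bin/env bash
# Sketch of a get_meminfo-style lookup: print the value of one meminfo field,
# system-wide by default or for a single NUMA node when a node number is given.
get_meminfo_sketch() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    while IFS= read -r line; do
        line=${line#Node "$node" }          # per-node files prefix every line with "Node N "
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then       # found the requested field; print its value
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}
# Against the snapshot printed above this would give, e.g.:
#   get_meminfo_sketch HugePages_Rsvd    -> 0
#   get_meminfo_sketch HugePages_Total   -> 1536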
setup/common.sh@31 -- # read -r var val _ 00:03:02.531 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.531 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:02.531 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:02.531 23:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:02.531 23:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:02.531 nr_hugepages=1536 00:03:02.531 23:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:02.531 resv_hugepages=0 00:03:02.531 23:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:02.531 surplus_hugepages=0 00:03:02.531 23:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:02.531 anon_hugepages=0 00:03:02.531 23:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:02.531 23:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:02.531 23:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:02.531 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:02.531 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:02.531 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:02.531 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:02.531 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:02.531 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:02.531 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:02.531 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:02.531 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:02.531 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.531 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.531 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 28210260 kB' 'MemAvailable: 31791900 kB' 'Buffers: 2704 kB' 'Cached: 10225068 kB' 'SwapCached: 0 kB' 'Active: 7257092 kB' 'Inactive: 3506296 kB' 'Active(anon): 6861384 kB' 'Inactive(anon): 0 kB' 'Active(file): 395708 kB' 'Inactive(file): 3506296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538716 kB' 'Mapped: 180556 kB' 'Shmem: 6325768 kB' 'KReclaimable: 182920 kB' 'Slab: 536848 kB' 'SReclaimable: 182920 kB' 'SUnreclaim: 353928 kB' 'KernelStack: 12352 kB' 'PageTables: 7536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829076 kB' 'Committed_AS: 7988872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196196 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1713756 kB' 'DirectMap2M: 17080320 kB' 'DirectMap1G: 33554432 kB'
00:03:02.532 [setup/common.sh get_meminfo: the /proc/meminfo fields from MemTotal through ShmemPmdMapped fail the HugePages_Total comparison and are skipped with continue] 00:03:02.532 23:06:17
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:02.532 
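At this point the check is plain bookkeeping: /proc/meminfo reports HugePages_Total 1536 with no reserved or surplus pages, hugepages.sh confirms 1536 == nr_hugepages + surp + resv, and get_nodes records the intended split of 512 pages on node 0 and 1024 on node 1 (512 + 1024 = 1536; at 2048 kB apiece that is the 3145728 kB shown as Hugetlb). A tiny restatement of that arithmetic, for illustration only (variable names mirror the trace, the snippet itself is not part of the test):

# Sketch of the bookkeeping verified above, plugging in the values from the snapshot.
nr_hugepages=1536 resv_hugepages=0 surplus_hugepages=0
nodes_sys=([0]=512 [1]=1024)
(( 1536 == nr_hugepages + surplus_hugepages + resv_hugepages ))   # pool read back from /proc/meminfo matches
(( nodes_sys[0] + nodes_sys[1] == nr_hugepages ))                 # 512 + 1024 == 1536
echo "$(( nr_hugepages * 2048 )) kB"                              # 3145728 kB, the Hugetlb figure above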
23:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 21147940 kB' 'MemUsed: 3424416 kB' 'SwapCached: 0 kB' 'Active: 1510908 kB' 'Inactive: 74428 kB' 'Active(anon): 1376752 kB' 'Inactive(anon): 0 kB' 'Active(file): 134156 kB' 'Inactive(file): 74428 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1241860 kB' 'Mapped: 76072 kB' 'AnonPages: 346588 kB' 'Shmem: 1033276 kB' 'KernelStack: 7384 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 49900 kB' 'Slab: 219740 kB' 'SReclaimable: 49900 kB' 'SUnreclaim: 169840 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.532 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.532 23:06:17 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:02.532 [setup/common.sh get_meminfo, node 0: the node0/meminfo fields from SwapCached through AnonHugePages fail the HugePages_Surp comparison and are skipped with continue] 00:03:02.533 23:06:17
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- 
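Node 0 is consistent: its meminfo reports HugePages_Total and HugePages_Free of 512 with HugePages_Surp 0, exactly the 512 pages assigned to it, so nodes_test[0] stays at 512. The same per-node counters are also exposed through sysfs; a small cross-check against that interface could look like the sketch below (the hugepages-2048kB sysfs path is the standard kernel layout and is assumed here, the trace itself only reads nodeN/meminfo):

# Sketch: cross-check the expected per-node split against the per-node sysfs counters.
# The hugepages-2048kB sysfs path below is the standard kernel layout and is an
# assumption here; the trace above only reads nodeN/meminfo.
expected=([0]=512 [1]=1024)
for node in "${!expected[@]}"; do
    f=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
    [[ -r $f ]] || continue
    printf 'node%s: expected %s, sysfs reports %s\n' "$node" "${expected[$node]}" "$(<"$f")"
done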
setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19454316 kB' 'MemFree: 7062068 kB' 'MemUsed: 12392248 kB' 'SwapCached: 0 kB' 'Active: 5746132 kB' 'Inactive: 3431868 kB' 'Active(anon): 5484580 kB' 'Inactive(anon): 0 kB' 'Active(file): 261552 kB' 'Inactive(file): 3431868 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8985948 kB' 'Mapped: 104484 kB' 'AnonPages: 192076 kB' 'Shmem: 5292528 kB' 'KernelStack: 5016 kB' 'PageTables: 3312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133020 kB' 'Slab: 317108 kB' 'SReclaimable: 133020 kB' 'SUnreclaim: 184088 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:02.533 [setup/common.sh get_meminfo, node 1: the node1/meminfo fields from SwapCached through AnonHugePages fail the HugePages_Surp comparison and are skipped with continue] 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.533 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:02.534 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.534 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.534 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.534 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.534 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.534 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.534 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.534 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.534 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.534 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.534 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.534 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.534 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.534 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.534 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.534 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.534 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.534 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.534 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.534 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.534 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.534 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.534 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.534 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.534 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.534 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:02.534 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.534 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.534 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.534 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:02.534 23:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:02.534 23:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:02.534 23:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:02.534 23:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:02.534 
23:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:02.534 23:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:02.534 node0=512 expecting 512 00:03:02.534 23:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:02.534 23:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:02.534 23:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:02.534 23:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:02.534 node1=1024 expecting 1024 00:03:02.534 23:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:02.534 00:03:02.534 real 0m1.382s 00:03:02.534 user 0m0.578s 00:03:02.534 sys 0m0.769s 00:03:02.534 23:06:17 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:02.534 23:06:17 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:02.534 ************************************ 00:03:02.534 END TEST custom_alloc 00:03:02.534 ************************************ 00:03:02.534 23:06:17 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:02.534 23:06:17 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:02.534 23:06:17 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:02.534 23:06:17 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:02.534 23:06:17 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:02.534 ************************************ 00:03:02.534 START TEST no_shrink_alloc 00:03:02.534 ************************************ 00:03:02.534 23:06:17 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:02.534 23:06:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:02.534 23:06:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:02.534 23:06:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:02.534 23:06:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:02.534 23:06:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:02.534 23:06:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:02.534 23:06:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:02.534 23:06:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:02.534 23:06:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:02.534 23:06:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:02.534 23:06:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:02.534 23:06:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:02.534 23:06:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:02.534 23:06:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:02.534 23:06:17 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:02.534 23:06:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:02.534 23:06:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:02.534 23:06:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:02.534 23:06:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:02.534 23:06:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:02.534 23:06:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:02.534 23:06:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:03.906 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:03.906 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:03.906 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:03.906 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:03.906 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:03.906 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:03.906 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:03.906 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:03.906 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:03.907 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:03.907 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:03.907 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:03.907 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:03.907 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:03.907 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:03.907 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:03.907 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29239344 kB' 'MemAvailable: 32820984 kB' 'Buffers: 2704 kB' 'Cached: 10225172 kB' 'SwapCached: 0 kB' 'Active: 7257488 kB' 'Inactive: 3506296 kB' 'Active(anon): 6861780 kB' 'Inactive(anon): 0 kB' 'Active(file): 395708 kB' 'Inactive(file): 3506296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539064 kB' 'Mapped: 180596 kB' 'Shmem: 6325872 kB' 'KReclaimable: 182920 kB' 'Slab: 536776 kB' 'SReclaimable: 182920 kB' 'SUnreclaim: 353856 kB' 'KernelStack: 12384 kB' 'PageTables: 7636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7989272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196228 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1713756 kB' 'DirectMap2M: 17080320 kB' 'DirectMap1G: 33554432 kB' 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.907 23:06:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.907 23:06:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.907 
23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.907 23:06:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.907 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29240012 kB' 'MemAvailable: 32821652 kB' 'Buffers: 2704 kB' 'Cached: 10225172 kB' 'SwapCached: 0 kB' 'Active: 7257788 kB' 'Inactive: 3506296 kB' 'Active(anon): 6862080 kB' 'Inactive(anon): 0 kB' 'Active(file): 395708 kB' 'Inactive(file): 3506296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539404 kB' 'Mapped: 180652 kB' 'Shmem: 6325872 kB' 'KReclaimable: 182920 kB' 'Slab: 536760 kB' 'SReclaimable: 182920 kB' 'SUnreclaim: 353840 kB' 'KernelStack: 12416 kB' 'PageTables: 7720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 
kB' 'Committed_AS: 7989288 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196212 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1713756 kB' 'DirectMap2M: 17080320 kB' 'DirectMap1G: 33554432 kB' 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.908 23:06:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.908 23:06:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.908 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.909 23:06:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29240324 kB' 'MemAvailable: 32821964 kB' 'Buffers: 2704 kB' 'Cached: 10225192 kB' 'SwapCached: 0 kB' 'Active: 7257328 kB' 'Inactive: 3506296 kB' 'Active(anon): 6861620 kB' 'Inactive(anon): 0 kB' 'Active(file): 395708 kB' 'Inactive(file): 3506296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538892 kB' 'Mapped: 180572 kB' 'Shmem: 6325892 kB' 'KReclaimable: 182920 kB' 'Slab: 536768 kB' 'SReclaimable: 182920 kB' 'SUnreclaim: 353848 kB' 'KernelStack: 12384 kB' 'PageTables: 7620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7989312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196212 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 
kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1713756 kB' 'DirectMap2M: 17080320 kB' 'DirectMap1G: 33554432 kB' 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.909 23:06:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.909 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:03.910 nr_hugepages=1024 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:03.910 resv_hugepages=0 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:03.910 surplus_hugepages=0 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:03.910 anon_hugepages=0 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29240324 kB' 'MemAvailable: 32821964 kB' 'Buffers: 2704 kB' 'Cached: 10225216 kB' 'SwapCached: 0 kB' 'Active: 7257332 kB' 'Inactive: 3506296 kB' 'Active(anon): 6861624 kB' 'Inactive(anon): 0 kB' 'Active(file): 395708 kB' 'Inactive(file): 3506296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538856 kB' 'Mapped: 180572 kB' 'Shmem: 6325916 kB' 'KReclaimable: 182920 kB' 'Slab: 536768 kB' 'SReclaimable: 182920 kB' 'SUnreclaim: 353848 kB' 'KernelStack: 12368 kB' 'PageTables: 7572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7989336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196212 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1713756 kB' 'DirectMap2M: 17080320 kB' 'DirectMap1G: 33554432 kB' 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.910 23:06:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.910 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.911 23:06:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.911 23:06:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:03.911 23:06:19 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.911 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 20096468 kB' 'MemUsed: 4475888 kB' 'SwapCached: 0 kB' 'Active: 1510840 kB' 'Inactive: 74428 kB' 'Active(anon): 1376684 kB' 'Inactive(anon): 0 kB' 'Active(file): 134156 kB' 'Inactive(file): 74428 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1241980 kB' 'Mapped: 76080 kB' 'AnonPages: 346416 kB' 'Shmem: 1033396 kB' 'KernelStack: 7320 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 49900 kB' 'Slab: 219680 kB' 'SReclaimable: 49900 kB' 'SUnreclaim: 169780 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.912 
23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.912 23:06:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.912 23:06:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:03.912 node0=1024 expecting 1024 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:03.912 23:06:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:05.287 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:05.287 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:05.287 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:05.287 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:05.287 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:05.287 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:05.287 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:05.287 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:05.287 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:05.287 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:05.287 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:05.287 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:05.287 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:05.287 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:05.287 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:05.287 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:05.287 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 
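[annotation] The hugepages.sh@117-130 records just above are the per-node comparison that prints "node0=1024 expecting 1024", and the @202 records re-invoke scripts/setup.sh with a smaller request while keeping the existing pool in place. A minimal sketch of that check, reconstructed only from the line references visible in this trace (the variable names and the source of the expected count are assumptions, not the verbatim script):

    #!/usr/bin/env bash
    # Sketch: compare the hugepages each NUMA node actually holds against the
    # count the test expects, then re-run setup.sh asking for fewer pages to
    # confirm the allocator does not shrink an already-populated pool.
    declare -A nodes_test=( [0]=1024 )   # the real script fills this from per-node meminfo
    expected=1024                        # assumed; the trace only shows the final "expecting 1024"
    for node in "${!nodes_test[@]}"; do
      echo "node${node}=${nodes_test[$node]} expecting ${expected}"
      [[ ${nodes_test[$node]} -eq ${expected} ]] || exit 1
    done
    # 1024 pages stay allocated; requesting only 512 must be a no-op
    CLEAR_HUGE=no NRHUGE=512 ./scripts/setup.sh   # path relative to the spdk checkout

The INFO line in the records that follow ("Requested 512 hugepages but 1024 already allocated on node0") is setup.sh reporting exactly that no-op.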
00:03:05.287 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:05.287 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:05.287 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:05.287 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:05.287 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:05.287 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:05.287 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:05.287 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:05.287 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:05.287 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:05.287 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:05.287 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:05.287 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:05.287 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:05.287 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.287 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:05.287 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:05.287 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.287 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.287 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.287 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29250240 kB' 'MemAvailable: 32831880 kB' 'Buffers: 2704 kB' 'Cached: 10225288 kB' 'SwapCached: 0 kB' 'Active: 7257528 kB' 'Inactive: 3506296 kB' 'Active(anon): 6861820 kB' 'Inactive(anon): 0 kB' 'Active(file): 395708 kB' 'Inactive(file): 3506296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538988 kB' 'Mapped: 180704 kB' 'Shmem: 6325988 kB' 'KReclaimable: 182920 kB' 'Slab: 536492 kB' 'SReclaimable: 182920 kB' 'SUnreclaim: 353572 kB' 'KernelStack: 12432 kB' 'PageTables: 7752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7989648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196292 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1713756 kB' 'DirectMap2M: 17080320 kB' 'DirectMap1G: 33554432 kB' 00:03:05.288 
23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
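[annotation] Every one of the repeated IFS=': ' / read -r var val _ / continue records in this stretch is one iteration of the meminfo scanner in setup/common.sh. A hedged reconstruction of that helper, pieced together from the @17-@33 line references in the trace (treat the exact wording as an approximation rather than the verbatim function):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below
    # get_meminfo FIELD [NODE] - print FIELD's value from /proc/meminfo, or from
    # /sys/devices/system/node/nodeN/meminfo when a node number is given.
    get_meminfo() {
      local get=$1 node=$2 var val _ line
      local mem_f=/proc/meminfo mem
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "
      for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # each mismatch is one 'continue' record in the log above
        echo "$val"
        return 0
      done
      echo 0   # assumed fallback when the field is absent
    }
    get_meminfo Hugepagesize   # against the snapshot above this prints 2048

The snapshot being scanned is internally consistent: HugePages_Total 1024 x Hugepagesize 2048 kB matches the reported Hugetlb 2097152 kB.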
00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.288 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29250852 kB' 'MemAvailable: 32832492 kB' 'Buffers: 2704 kB' 'Cached: 10225288 kB' 'SwapCached: 0 kB' 'Active: 7257172 kB' 'Inactive: 3506296 kB' 'Active(anon): 6861464 kB' 'Inactive(anon): 0 kB' 'Active(file): 395708 kB' 'Inactive(file): 3506296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538660 kB' 'Mapped: 180656 kB' 'Shmem: 6325988 kB' 'KReclaimable: 182920 kB' 'Slab: 536492 kB' 'SReclaimable: 182920 kB' 'SUnreclaim: 353572 kB' 'KernelStack: 12416 kB' 'PageTables: 7688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7989664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196260 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1713756 kB' 'DirectMap2M: 17080320 kB' 'DirectMap1G: 33554432 kB' 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.289 23:06:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.289 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
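[annotation] The scan running here is the second get_meminfo call of this verify pass: hugepages.sh@97 already took AnonHugePages (anon=0), @99 is walking the snapshot for HugePages_Surp, and @100 will do the same for HugePages_Rsvd. Each call re-reads /proc/meminfo, which is why the three snapshots in this stretch differ slightly in MemFree, AnonPages and PageTables. An outline of what is being collected, reusing the get_meminfo sketch above (the final arithmetic is an assumption; it is not visible in this excerpt):

    # transparent hugepages must not be globally disabled for the anon figure to mean anything
    if [[ $(cat /sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
      anon=$(get_meminfo AnonHugePages)   # 0 in this run
    fi
    surp=$(get_meminfo HugePages_Surp)    # surplus pages beyond nr_hugepages, 0 here
    resv=$(get_meminfo HugePages_Rsvd)    # pages reserved for mappings but not yet faulted in
    # assumed use: the per-node totals compared later are taken net of surplus pages,
    # e.g. nodes_test[node]=$(( HugePages_Total - surp ))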
00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.290 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:05.291 23:06:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29251044 kB' 'MemAvailable: 32832668 kB' 'Buffers: 2704 kB' 'Cached: 10225296 kB' 'SwapCached: 0 kB' 'Active: 7256776 kB' 'Inactive: 3506296 kB' 'Active(anon): 6861068 kB' 'Inactive(anon): 0 kB' 'Active(file): 395708 kB' 'Inactive(file): 3506296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538268 kB' 'Mapped: 180580 kB' 'Shmem: 6325996 kB' 'KReclaimable: 182888 kB' 'Slab: 536444 kB' 'SReclaimable: 182888 kB' 'SUnreclaim: 353556 kB' 'KernelStack: 12432 kB' 'PageTables: 7732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7989688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196260 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1713756 kB' 'DirectMap2M: 17080320 kB' 'DirectMap1G: 33554432 kB' 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.291 23:06:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.291 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.292 23:06:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.292 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:05.293 nr_hugepages=1024 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:05.293 resv_hugepages=0 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:05.293 surplus_hugepages=0 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:05.293 anon_hugepages=0 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 
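[editorial note] The get_meminfo xtrace above and below amounts to a keyed lookup in /proc/meminfo (or a per-node meminfo file) for a single counter such as HugePages_Total or HugePages_Rsvd. The following is a minimal stand-alone sketch of that lookup; get_meminfo_value is an illustrative name and this is a simplification of what the trace shows, not SPDK's setup/common.sh helper:

    # Minimal sketch of the meminfo lookup traced above (illustrative only, not SPDK's helper).
    get_meminfo_value() {
        local key=$1 node=${2:-} mem_f=/proc/meminfo line var val _
        # Per-node counters live under /sys/devices/system/node/node<N>/meminfo when a node is given.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            line=${line#"Node $node "}              # per-node files prefix each line with "Node <N> "
            IFS=': ' read -r var val _ <<< "$line"  # split "HugePages_Total:    1024" into key/value
            if [[ $var == "$key" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done < "$mem_f"
        echo 0                                      # key absent; treat as zero, as the test does
    }

    # e.g. get_meminfo_value HugePages_Total      -> 1024 on this runner
    #      get_meminfo_value HugePages_Surp 0     -> 0 for NUMA node 0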
00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29251100 kB' 'MemAvailable: 32832724 kB' 'Buffers: 2704 kB' 'Cached: 10225328 kB' 'SwapCached: 0 kB' 'Active: 7257052 kB' 'Inactive: 3506296 kB' 'Active(anon): 6861344 kB' 'Inactive(anon): 0 kB' 'Active(file): 395708 kB' 'Inactive(file): 3506296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538512 kB' 'Mapped: 180580 kB' 'Shmem: 6326028 kB' 'KReclaimable: 182888 kB' 'Slab: 536444 kB' 'SReclaimable: 182888 kB' 'SUnreclaim: 353556 kB' 'KernelStack: 12416 kB' 'PageTables: 7684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7989708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196260 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1713756 kB' 'DirectMap2M: 17080320 kB' 'DirectMap1G: 33554432 kB' 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.293 23:06:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.293 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.293 23:06:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.294 23:06:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:05.294 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 20104188 kB' 'MemUsed: 4468168 kB' 'SwapCached: 0 kB' 'Active: 1510668 kB' 'Inactive: 74428 kB' 'Active(anon): 1376512 kB' 'Inactive(anon): 0 kB' 'Active(file): 134156 kB' 'Inactive(file): 74428 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1242088 kB' 'Mapped: 76088 kB' 'AnonPages: 346172 kB' 'Shmem: 1033504 kB' 'KernelStack: 7400 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 49868 kB' 'Slab: 219596 kB' 'SReclaimable: 49868 kB' 'SUnreclaim: 169728 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.295 23:06:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.295 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.296 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.296 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.296 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.296 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.296 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.296 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.296 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.296 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.296 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.296 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.296 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.296 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.296 23:06:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.296 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.296 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.296 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.296 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.296 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.296 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.296 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.296 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.296 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.296 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.296 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.296 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.296 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.296 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.296 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.296 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.296 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.296 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.296 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.296 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.296 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:05.296 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:05.296 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:05.296 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:05.296 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:05.296 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:05.296 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:05.296 node0=1024 expecting 1024 00:03:05.296 23:06:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:05.296 00:03:05.296 real 0m2.730s 00:03:05.296 user 0m1.175s 00:03:05.296 sys 0m1.495s 00:03:05.296 23:06:20 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:05.296 23:06:20 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:05.296 ************************************ 00:03:05.296 END TEST no_shrink_alloc 
00:03:05.296 ************************************ 00:03:05.296 23:06:20 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:05.296 23:06:20 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:05.296 23:06:20 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:05.296 23:06:20 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:05.296 23:06:20 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:05.296 23:06:20 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:05.296 23:06:20 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:05.296 23:06:20 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:05.296 23:06:20 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:05.296 23:06:20 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:05.296 23:06:20 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:05.296 23:06:20 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:05.296 23:06:20 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:05.296 23:06:20 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:05.296 23:06:20 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:05.554 00:03:05.554 real 0m11.281s 00:03:05.554 user 0m4.407s 00:03:05.554 sys 0m5.835s 00:03:05.554 23:06:20 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:05.554 23:06:20 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:05.554 ************************************ 00:03:05.554 END TEST hugepages 00:03:05.554 ************************************ 00:03:05.554 23:06:20 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:05.554 23:06:20 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:05.554 23:06:20 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:05.554 23:06:20 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:05.554 23:06:20 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:05.554 ************************************ 00:03:05.554 START TEST driver 00:03:05.554 ************************************ 00:03:05.554 23:06:20 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:05.554 * Looking for test storage... 
00:03:05.554 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:05.554 23:06:20 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:05.554 23:06:20 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:05.554 23:06:20 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:08.107 23:06:23 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:08.107 23:06:23 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:08.107 23:06:23 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:08.107 23:06:23 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:08.107 ************************************ 00:03:08.107 START TEST guess_driver 00:03:08.107 ************************************ 00:03:08.107 23:06:23 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:08.107 23:06:23 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:08.107 23:06:23 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:08.107 23:06:23 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:08.107 23:06:23 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:08.107 23:06:23 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:08.107 23:06:23 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:08.107 23:06:23 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:08.107 23:06:23 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:08.107 23:06:23 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:08.107 23:06:23 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 143 > 0 )) 00:03:08.107 23:06:23 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:08.107 23:06:23 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:08.107 23:06:23 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:08.107 23:06:23 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:08.107 23:06:23 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:08.107 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:08.107 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:08.107 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:08.107 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:08.107 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:08.107 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:08.107 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:08.107 23:06:23 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:08.107 23:06:23 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:08.107 23:06:23 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:08.107 23:06:23 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:08.107 23:06:23 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:08.107 Looking for driver=vfio-pci 00:03:08.107 23:06:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:08.107 23:06:23 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:08.107 23:06:23 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:08.107 23:06:23 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:09.048 23:06:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:09.048 23:06:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:09.048 23:06:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:09.048 23:06:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:09.048 23:06:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:09.048 23:06:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:09.048 23:06:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:09.048 23:06:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:09.048 23:06:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:09.048 23:06:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:09.048 23:06:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:09.048 23:06:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:09.048 23:06:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:09.048 23:06:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:09.048 23:06:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:09.048 23:06:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:09.048 23:06:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:09.048 23:06:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:09.307 23:06:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:09.307 23:06:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:09.307 23:06:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:09.307 23:06:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:09.307 23:06:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:09.307 23:06:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:09.307 23:06:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:09.307 23:06:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:09.307 23:06:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:09.307 23:06:24 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:09.307 23:06:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:09.307 23:06:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:09.307 23:06:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:09.307 23:06:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:09.307 23:06:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:09.307 23:06:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:09.307 23:06:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:09.307 23:06:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:09.307 23:06:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:09.307 23:06:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:09.307 23:06:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:09.307 23:06:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:09.307 23:06:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:09.307 23:06:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:09.307 23:06:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:09.307 23:06:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:09.307 23:06:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:09.307 23:06:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:09.307 23:06:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:09.307 23:06:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:10.244 23:06:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:10.244 23:06:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:10.244 23:06:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:10.244 23:06:25 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:10.244 23:06:25 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:10.244 23:06:25 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:10.244 23:06:25 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:12.774 00:03:12.774 real 0m4.835s 00:03:12.774 user 0m1.077s 00:03:12.774 sys 0m1.899s 00:03:12.774 23:06:27 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:12.774 23:06:27 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:12.774 ************************************ 00:03:12.774 END TEST guess_driver 00:03:12.774 ************************************ 00:03:12.774 23:06:27 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:03:12.774 00:03:12.774 real 0m7.284s 00:03:12.774 user 0m1.635s 00:03:12.774 sys 0m2.840s 00:03:12.774 23:06:27 
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:12.774 23:06:27 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:12.774 ************************************ 00:03:12.774 END TEST driver 00:03:12.774 ************************************ 00:03:12.774 23:06:27 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:12.774 23:06:27 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:12.774 23:06:27 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:12.774 23:06:27 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:12.774 23:06:27 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:12.774 ************************************ 00:03:12.774 START TEST devices 00:03:12.774 ************************************ 00:03:12.774 23:06:27 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:12.774 * Looking for test storage... 00:03:12.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:12.774 23:06:28 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:12.774 23:06:28 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:12.774 23:06:28 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:12.774 23:06:28 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:14.676 23:06:29 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:14.676 23:06:29 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:14.676 23:06:29 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:14.676 23:06:29 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:14.676 23:06:29 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:14.676 23:06:29 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:14.676 23:06:29 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:14.676 23:06:29 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:14.676 23:06:29 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:14.676 23:06:29 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:14.676 23:06:29 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:14.676 23:06:29 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:14.676 23:06:29 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:14.676 23:06:29 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:14.676 23:06:29 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:14.676 23:06:29 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:14.676 23:06:29 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:14.676 23:06:29 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:82:00.0 00:03:14.676 23:06:29 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\2\:\0\0\.\0* ]] 00:03:14.676 23:06:29 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:14.676 23:06:29 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:14.676 
23:06:29 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:14.676 No valid GPT data, bailing 00:03:14.676 23:06:29 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:14.676 23:06:29 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:14.676 23:06:29 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:14.676 23:06:29 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:14.676 23:06:29 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:14.676 23:06:29 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:14.676 23:06:29 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:14.676 23:06:29 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:14.676 23:06:29 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:14.676 23:06:29 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:82:00.0 00:03:14.676 23:06:29 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:14.676 23:06:29 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:14.676 23:06:29 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:14.676 23:06:29 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:14.676 23:06:29 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:14.676 23:06:29 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:14.676 ************************************ 00:03:14.676 START TEST nvme_mount 00:03:14.676 ************************************ 00:03:14.676 23:06:29 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:03:14.676 23:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:14.676 23:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:14.676 23:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:14.676 23:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:14.676 23:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:14.676 23:06:29 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:14.676 23:06:29 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:14.676 23:06:29 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:14.676 23:06:29 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:14.676 23:06:29 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:14.676 23:06:29 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:14.676 23:06:29 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:14.676 23:06:29 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:14.676 23:06:29 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:14.676 23:06:29 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:14.676 23:06:29 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:03:14.676 23:06:29 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:14.676 23:06:29 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:14.676 23:06:29 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:15.611 Creating new GPT entries in memory. 00:03:15.611 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:15.611 other utilities. 00:03:15.611 23:06:30 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:15.611 23:06:30 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:15.611 23:06:30 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:15.611 23:06:30 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:15.611 23:06:30 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:16.545 Creating new GPT entries in memory. 00:03:16.545 The operation has completed successfully. 00:03:16.545 23:06:31 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:16.545 23:06:31 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:16.545 23:06:31 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2202843 00:03:16.545 23:06:31 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:16.545 23:06:31 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:16.545 23:06:31 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:16.545 23:06:31 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:16.545 23:06:31 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:16.545 23:06:31 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:16.545 23:06:31 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:82:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:16.545 23:06:31 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:03:16.545 23:06:31 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:16.545 23:06:31 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:16.545 23:06:31 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:16.545 23:06:31 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:16.545 23:06:31 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:16.545 23:06:31 
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:16.545 23:06:31 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:16.545 23:06:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.545 23:06:31 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:03:16.545 23:06:31 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:16.545 23:06:31 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:16.545 23:06:31 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:17.920 23:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:17.920 23:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:17.920 23:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:17.920 23:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.920 23:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:17.920 23:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.920 23:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:17.920 23:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.920 23:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:17.920 23:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.920 23:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:17.920 23:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.920 23:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:17.920 23:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.920 23:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:17.920 23:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.920 23:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:17.920 23:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.920 23:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:17.920 23:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.920 23:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:17.920 23:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.920 23:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:17.920 23:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.920 23:06:32 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:17.920 23:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.920 23:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:17.920 23:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.920 23:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:17.920 23:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.920 23:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:17.920 23:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.920 23:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:17.920 23:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.920 23:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:17.920 23:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.920 23:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:17.920 23:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:17.920 23:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:17.920 23:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:17.920 23:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:17.920 23:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:17.920 23:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:17.920 23:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:17.920 23:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:17.920 23:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:17.920 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:17.920 23:06:33 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:17.920 23:06:33 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:18.178 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:18.178 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:18.178 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:18.178 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:18.178 23:06:33 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:18.178 23:06:33 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:18.178 23:06:33 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:18.178 23:06:33 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:18.178 23:06:33 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:18.178 23:06:33 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:18.178 23:06:33 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:82:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:18.178 23:06:33 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:03:18.178 23:06:33 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:18.178 23:06:33 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:18.178 23:06:33 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:18.178 23:06:33 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:18.178 23:06:33 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:18.178 23:06:33 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:18.178 23:06:33 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:18.178 23:06:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.178 23:06:33 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:03:18.178 23:06:33 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:18.178 23:06:33 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:18.178 23:06:33 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:19.113 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:19.113 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:19.113 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:19.113 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.114 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:19.114 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.114 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:19.114 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.114 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 
== \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:19.114 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.114 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:19.114 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.114 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:19.114 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.114 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:19.114 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.114 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:19.114 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.114 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:19.114 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.114 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:19.114 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.114 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:19.114 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.114 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:19.114 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.114 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:19.114 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.114 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:19.114 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.114 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:19.114 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.114 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:19.114 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.114 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:19.114 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.372 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:19.372 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:19.372 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:19.372 23:06:34 setup.sh.devices.nvme_mount -- 
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:19.372 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:19.372 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:19.373 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:82:00.0 data@nvme0n1 '' '' 00:03:19.373 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:03:19.373 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:19.373 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:19.373 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:19.373 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:19.373 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:19.373 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:19.373 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.373 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:03:19.373 23:06:34 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:19.373 23:06:34 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:19.373 23:06:34 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:20.751 23:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:20.751 23:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:20.751 23:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:20.751 23:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.751 23:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:20.751 23:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.751 23:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:20.751 23:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.751 23:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:20.751 23:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.751 23:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:20.751 23:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.751 23:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:20.751 23:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.751 23:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 
00:03:20.751 23:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.751 23:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:20.751 23:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.751 23:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:20.751 23:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.751 23:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:20.751 23:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.751 23:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:20.751 23:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.751 23:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:20.751 23:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.751 23:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:20.751 23:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.751 23:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:20.751 23:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.751 23:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:20.751 23:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.751 23:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:20.751 23:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.751 23:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:20.751 23:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.751 23:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:20.751 23:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:20.751 23:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:20.751 23:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:20.751 23:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:20.751 23:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:20.751 23:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:20.751 23:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:20.751 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:20.751 00:03:20.751 real 0m6.377s 00:03:20.751 user 0m1.546s 00:03:20.751 sys 0m2.435s 00:03:20.751 23:06:35 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:20.751 23:06:35 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:03:20.751 ************************************ 00:03:20.751 END TEST nvme_mount 00:03:20.751 ************************************ 00:03:20.751 23:06:35 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:20.751 23:06:35 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:20.751 23:06:35 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:20.751 23:06:35 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:20.751 23:06:35 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:20.751 ************************************ 00:03:20.751 START TEST dm_mount 00:03:20.751 ************************************ 00:03:20.751 23:06:35 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:03:20.751 23:06:35 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:20.751 23:06:35 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:20.751 23:06:35 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:20.751 23:06:35 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:20.751 23:06:35 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:20.751 23:06:35 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:20.751 23:06:35 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:20.751 23:06:35 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:20.751 23:06:35 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:20.751 23:06:35 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:20.751 23:06:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:20.751 23:06:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:20.751 23:06:35 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:20.751 23:06:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:20.751 23:06:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:20.751 23:06:35 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:20.751 23:06:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:20.751 23:06:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:20.751 23:06:35 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:20.751 23:06:35 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:20.751 23:06:35 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:22.126 Creating new GPT entries in memory. 00:03:22.126 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:22.126 other utilities. 00:03:22.126 23:06:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:22.126 23:06:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:22.126 23:06:37 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:03:22.126 23:06:37 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:22.126 23:06:37 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:22.742 Creating new GPT entries in memory. 00:03:22.742 The operation has completed successfully. 00:03:22.742 23:06:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:22.742 23:06:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:22.742 23:06:38 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:22.742 23:06:38 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:22.742 23:06:38 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:24.117 The operation has completed successfully. 00:03:24.117 23:06:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:24.117 23:06:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:24.117 23:06:39 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2205251 00:03:24.117 23:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:24.117 23:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:24.117 23:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:24.117 23:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:24.117 23:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:24.117 23:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:24.117 23:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:24.117 23:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:24.117 23:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:24.117 23:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:24.117 23:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:24.117 23:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:24.117 23:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:24.117 23:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:24.117 23:06:39 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:24.117 23:06:39 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:24.117 23:06:39 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:24.117 23:06:39 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:24.117 23:06:39 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:24.117 23:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:82:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:24.117 23:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:03:24.117 23:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:24.118 23:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:24.118 23:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:24.118 23:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:24.118 23:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:24.118 23:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:24.118 23:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:24.118 23:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.118 23:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:03:24.118 23:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:24.118 23:06:39 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.118 23:06:39 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:25.048 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:25.048 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:25.048 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:25.048 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.048 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:25.049 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.049 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:25.049 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.049 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:25.049 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.049 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:25.049 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.049 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:25.049 23:06:40 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.049 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:25.049 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.049 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:25.049 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.049 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:25.049 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.049 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:25.049 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.049 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:25.049 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.049 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:25.049 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.049 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:25.049 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.049 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:25.049 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.049 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:25.049 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.049 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:25.049 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.049 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:25.049 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.305 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:25.305 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:25.305 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:25.305 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:25.305 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:25.305 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:25.305 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:82:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:25.305 23:06:40 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:03:25.305 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:25.305 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:25.305 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:25.305 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:25.305 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:25.305 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:25.305 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.305 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:03:25.305 23:06:40 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:25.305 23:06:40 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:25.305 23:06:40 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:26.235 23:06:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:26.235 23:06:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:26.236 23:06:41 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:26.236 23:06:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.236 23:06:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:26.236 23:06:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.236 23:06:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:26.236 23:06:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.236 23:06:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:26.236 23:06:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.236 23:06:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:26.236 23:06:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.236 23:06:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:26.236 23:06:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.236 23:06:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:26.236 23:06:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.236 23:06:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:26.236 23:06:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.236 23:06:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:26.236 23:06:41 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.236 23:06:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:26.236 23:06:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.236 23:06:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:26.236 23:06:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.236 23:06:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:26.236 23:06:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.236 23:06:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:26.236 23:06:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.236 23:06:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:26.236 23:06:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.236 23:06:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:26.236 23:06:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.236 23:06:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:26.236 23:06:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.236 23:06:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:26.236 23:06:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.494 23:06:41 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:26.494 23:06:41 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:26.494 23:06:41 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:26.494 23:06:41 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:26.494 23:06:41 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:26.494 23:06:41 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:26.494 23:06:41 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:26.494 23:06:41 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:26.494 23:06:41 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:26.494 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:26.494 23:06:41 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:26.494 23:06:41 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:26.494 00:03:26.494 real 0m5.709s 00:03:26.494 user 0m0.954s 00:03:26.494 sys 0m1.614s 00:03:26.494 23:06:41 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:26.494 23:06:41 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:26.494 ************************************ 00:03:26.494 END TEST dm_mount 00:03:26.494 ************************************ 00:03:26.494 23:06:41 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:03:26.494 23:06:41 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:26.494 23:06:41 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:26.494 23:06:41 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:26.494 23:06:41 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:26.494 23:06:41 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:26.494 23:06:41 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:26.494 23:06:41 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:26.752 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:26.752 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:26.752 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:26.752 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:26.752 23:06:41 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:26.752 23:06:41 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:26.752 23:06:41 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:26.752 23:06:41 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:26.752 23:06:41 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:26.752 23:06:41 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:26.752 23:06:41 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:26.752 00:03:26.752 real 0m14.021s 00:03:26.752 user 0m3.204s 00:03:26.752 sys 0m5.059s 00:03:26.752 23:06:41 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:26.752 23:06:41 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:26.752 ************************************ 00:03:26.752 END TEST devices 00:03:26.752 ************************************ 00:03:26.752 23:06:42 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:26.752 00:03:26.752 real 0m43.094s 00:03:26.752 user 0m12.518s 00:03:26.752 sys 0m19.070s 00:03:26.752 23:06:42 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:26.752 23:06:42 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:26.752 ************************************ 00:03:26.752 END TEST setup.sh 00:03:26.752 ************************************ 00:03:26.752 23:06:42 -- common/autotest_common.sh@1142 -- # return 0 00:03:26.752 23:06:42 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:28.129 Hugepages 00:03:28.129 node hugesize free / total 00:03:28.129 node0 1048576kB 0 / 0 00:03:28.129 node0 2048kB 2048 / 2048 00:03:28.129 node1 1048576kB 0 / 0 00:03:28.129 node1 2048kB 0 / 0 00:03:28.129 00:03:28.129 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:28.129 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:28.129 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:28.129 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:28.129 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:28.129 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:28.129 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:28.129 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:28.129 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:28.129 I/OAT 
0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:28.129 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:28.129 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:28.129 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:28.129 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:28.129 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:28.129 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:28.129 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:28.129 NVMe 0000:82:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:28.129 23:06:43 -- spdk/autotest.sh@130 -- # uname -s 00:03:28.129 23:06:43 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:28.129 23:06:43 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:28.129 23:06:43 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:29.080 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:29.080 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:29.080 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:29.080 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:29.080 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:29.080 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:29.340 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:29.341 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:29.341 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:29.341 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:29.341 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:29.341 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:29.341 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:29.341 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:29.341 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:29.341 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:30.275 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:03:30.275 23:06:45 -- common/autotest_common.sh@1532 -- # sleep 1 00:03:31.210 23:06:46 -- common/autotest_common.sh@1533 -- # bdfs=() 00:03:31.210 23:06:46 -- common/autotest_common.sh@1533 -- # local bdfs 00:03:31.210 23:06:46 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:03:31.210 23:06:46 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:03:31.210 23:06:46 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:31.210 23:06:46 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:31.210 23:06:46 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:31.210 23:06:46 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:31.210 23:06:46 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:31.468 23:06:46 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:31.468 23:06:46 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:82:00.0 00:03:31.468 23:06:46 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:32.402 Waiting for block devices as requested 00:03:32.661 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:03:32.661 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:32.919 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:32.919 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:32.919 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:32.919 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:33.177 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:33.177 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:33.177 0000:00:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:03:33.177 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:33.434 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:33.434 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:33.434 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:33.434 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:33.691 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:33.691 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:33.691 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:33.948 23:06:49 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:03:33.948 23:06:49 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:82:00.0 00:03:33.948 23:06:49 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:03:33.948 23:06:49 -- common/autotest_common.sh@1502 -- # grep 0000:82:00.0/nvme/nvme 00:03:33.948 23:06:49 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:03:33.948 23:06:49 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 ]] 00:03:33.948 23:06:49 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:03:33.948 23:06:49 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:03:33.948 23:06:49 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:03:33.948 23:06:49 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:03:33.948 23:06:49 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:03:33.948 23:06:49 -- common/autotest_common.sh@1545 -- # grep oacs 00:03:33.948 23:06:49 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:03:33.948 23:06:49 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:03:33.948 23:06:49 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:03:33.948 23:06:49 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:03:33.948 23:06:49 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:03:33.948 23:06:49 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:03:33.948 23:06:49 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:03:33.948 23:06:49 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:03:33.948 23:06:49 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:03:33.948 23:06:49 -- common/autotest_common.sh@1557 -- # continue 00:03:33.948 23:06:49 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:33.948 23:06:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:33.948 23:06:49 -- common/autotest_common.sh@10 -- # set +x 00:03:33.948 23:06:49 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:33.948 23:06:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:33.948 23:06:49 -- common/autotest_common.sh@10 -- # set +x 00:03:33.948 23:06:49 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:35.317 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:35.317 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:35.317 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:35.317 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:35.317 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:35.317 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:35.317 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:35.317 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:35.317 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:35.317 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 
00:03:35.317 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:35.317 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:35.317 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:35.317 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:35.317 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:35.317 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:36.249 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:03:36.249 23:06:51 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:36.249 23:06:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:36.249 23:06:51 -- common/autotest_common.sh@10 -- # set +x 00:03:36.249 23:06:51 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:36.249 23:06:51 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:03:36.249 23:06:51 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:03:36.249 23:06:51 -- common/autotest_common.sh@1577 -- # bdfs=() 00:03:36.249 23:06:51 -- common/autotest_common.sh@1577 -- # local bdfs 00:03:36.249 23:06:51 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:03:36.249 23:06:51 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:36.249 23:06:51 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:36.249 23:06:51 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:36.249 23:06:51 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:36.249 23:06:51 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:36.250 23:06:51 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:36.250 23:06:51 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:82:00.0 00:03:36.250 23:06:51 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:03:36.250 23:06:51 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:82:00.0/device 00:03:36.250 23:06:51 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:03:36.250 23:06:51 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:36.250 23:06:51 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:03:36.250 23:06:51 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:82:00.0 00:03:36.250 23:06:51 -- common/autotest_common.sh@1592 -- # [[ -z 0000:82:00.0 ]] 00:03:36.250 23:06:51 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=2210461 00:03:36.250 23:06:51 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:36.250 23:06:51 -- common/autotest_common.sh@1598 -- # waitforlisten 2210461 00:03:36.250 23:06:51 -- common/autotest_common.sh@829 -- # '[' -z 2210461 ']' 00:03:36.250 23:06:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:36.250 23:06:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:36.250 23:06:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:36.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:36.250 23:06:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:36.250 23:06:51 -- common/autotest_common.sh@10 -- # set +x 00:03:36.508 [2024-07-15 23:06:51.564198] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
00:03:36.508 [2024-07-15 23:06:51.564316] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2210461 ] 00:03:36.508 EAL: No free 2048 kB hugepages reported on node 1 00:03:36.508 [2024-07-15 23:06:51.622637] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:36.508 [2024-07-15 23:06:51.731761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:36.765 23:06:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:36.765 23:06:51 -- common/autotest_common.sh@862 -- # return 0 00:03:36.765 23:06:51 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:03:36.765 23:06:51 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:03:36.765 23:06:51 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:82:00.0 00:03:40.042 nvme0n1 00:03:40.042 23:06:55 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:40.042 [2024-07-15 23:06:55.301426] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:40.042 [2024-07-15 23:06:55.301478] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:40.042 request: 00:03:40.042 { 00:03:40.042 "nvme_ctrlr_name": "nvme0", 00:03:40.042 "password": "test", 00:03:40.042 "method": "bdev_nvme_opal_revert", 00:03:40.042 "req_id": 1 00:03:40.042 } 00:03:40.042 Got JSON-RPC error response 00:03:40.042 response: 00:03:40.042 { 00:03:40.042 "code": -32603, 00:03:40.042 "message": "Internal error" 00:03:40.042 } 00:03:40.042 23:06:55 -- common/autotest_common.sh@1604 -- # true 00:03:40.042 23:06:55 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:03:40.042 23:06:55 -- common/autotest_common.sh@1608 -- # killprocess 2210461 00:03:40.042 23:06:55 -- common/autotest_common.sh@948 -- # '[' -z 2210461 ']' 00:03:40.042 23:06:55 -- common/autotest_common.sh@952 -- # kill -0 2210461 00:03:40.042 23:06:55 -- common/autotest_common.sh@953 -- # uname 00:03:40.042 23:06:55 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:40.042 23:06:55 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2210461 00:03:40.042 23:06:55 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:40.042 23:06:55 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:40.042 23:06:55 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2210461' 00:03:40.042 killing process with pid 2210461 00:03:40.042 23:06:55 -- common/autotest_common.sh@967 -- # kill 2210461 00:03:40.042 23:06:55 -- common/autotest_common.sh@972 -- # wait 2210461 00:03:41.941 23:06:57 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:03:41.941 23:06:57 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:41.941 23:06:57 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:41.941 23:06:57 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:41.941 23:06:57 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:41.941 23:06:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:41.941 23:06:57 -- common/autotest_common.sh@10 -- # set +x 00:03:41.941 23:06:57 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:03:41.941 23:06:57 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:41.941 23:06:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:41.941 23:06:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:41.941 23:06:57 -- common/autotest_common.sh@10 -- # set +x 00:03:41.941 ************************************ 00:03:41.941 START TEST env 00:03:41.941 ************************************ 00:03:41.941 23:06:57 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:41.941 * Looking for test storage... 00:03:41.941 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:41.941 23:06:57 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:41.941 23:06:57 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:41.941 23:06:57 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:41.941 23:06:57 env -- common/autotest_common.sh@10 -- # set +x 00:03:42.199 ************************************ 00:03:42.199 START TEST env_memory 00:03:42.199 ************************************ 00:03:42.199 23:06:57 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:42.199 00:03:42.199 00:03:42.199 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.199 http://cunit.sourceforge.net/ 00:03:42.199 00:03:42.199 00:03:42.199 Suite: memory 00:03:42.199 Test: alloc and free memory map ...[2024-07-15 23:06:57.291551] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:42.199 passed 00:03:42.199 Test: mem map translation ...[2024-07-15 23:06:57.311550] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:42.199 [2024-07-15 23:06:57.311570] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:42.199 [2024-07-15 23:06:57.311622] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:42.199 [2024-07-15 23:06:57.311634] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:42.199 passed 00:03:42.199 Test: mem map registration ...[2024-07-15 23:06:57.352239] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:42.199 [2024-07-15 23:06:57.352258] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:42.199 passed 00:03:42.199 Test: mem map adjacent registrations ...passed 00:03:42.199 00:03:42.199 Run Summary: Type Total Ran Passed Failed Inactive 00:03:42.199 suites 1 1 n/a 0 0 00:03:42.199 tests 4 4 4 0 0 00:03:42.199 asserts 152 152 152 0 n/a 00:03:42.199 00:03:42.199 Elapsed time = 0.145 seconds 00:03:42.199 00:03:42.199 real 0m0.154s 00:03:42.199 user 0m0.146s 00:03:42.199 sys 0m0.007s 00:03:42.199 23:06:57 
env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:42.199 23:06:57 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:42.199 ************************************ 00:03:42.199 END TEST env_memory 00:03:42.199 ************************************ 00:03:42.199 23:06:57 env -- common/autotest_common.sh@1142 -- # return 0 00:03:42.199 23:06:57 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:42.199 23:06:57 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:42.199 23:06:57 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.199 23:06:57 env -- common/autotest_common.sh@10 -- # set +x 00:03:42.199 ************************************ 00:03:42.199 START TEST env_vtophys 00:03:42.199 ************************************ 00:03:42.200 23:06:57 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:42.200 EAL: lib.eal log level changed from notice to debug 00:03:42.200 EAL: Detected lcore 0 as core 0 on socket 0 00:03:42.200 EAL: Detected lcore 1 as core 1 on socket 0 00:03:42.200 EAL: Detected lcore 2 as core 2 on socket 0 00:03:42.200 EAL: Detected lcore 3 as core 3 on socket 0 00:03:42.200 EAL: Detected lcore 4 as core 4 on socket 0 00:03:42.200 EAL: Detected lcore 5 as core 5 on socket 0 00:03:42.200 EAL: Detected lcore 6 as core 8 on socket 0 00:03:42.200 EAL: Detected lcore 7 as core 9 on socket 0 00:03:42.200 EAL: Detected lcore 8 as core 10 on socket 0 00:03:42.200 EAL: Detected lcore 9 as core 11 on socket 0 00:03:42.200 EAL: Detected lcore 10 as core 12 on socket 0 00:03:42.200 EAL: Detected lcore 11 as core 13 on socket 0 00:03:42.200 EAL: Detected lcore 12 as core 0 on socket 1 00:03:42.200 EAL: Detected lcore 13 as core 1 on socket 1 00:03:42.200 EAL: Detected lcore 14 as core 2 on socket 1 00:03:42.200 EAL: Detected lcore 15 as core 3 on socket 1 00:03:42.200 EAL: Detected lcore 16 as core 4 on socket 1 00:03:42.200 EAL: Detected lcore 17 as core 5 on socket 1 00:03:42.200 EAL: Detected lcore 18 as core 8 on socket 1 00:03:42.200 EAL: Detected lcore 19 as core 9 on socket 1 00:03:42.200 EAL: Detected lcore 20 as core 10 on socket 1 00:03:42.200 EAL: Detected lcore 21 as core 11 on socket 1 00:03:42.200 EAL: Detected lcore 22 as core 12 on socket 1 00:03:42.200 EAL: Detected lcore 23 as core 13 on socket 1 00:03:42.200 EAL: Detected lcore 24 as core 0 on socket 0 00:03:42.200 EAL: Detected lcore 25 as core 1 on socket 0 00:03:42.200 EAL: Detected lcore 26 as core 2 on socket 0 00:03:42.200 EAL: Detected lcore 27 as core 3 on socket 0 00:03:42.200 EAL: Detected lcore 28 as core 4 on socket 0 00:03:42.200 EAL: Detected lcore 29 as core 5 on socket 0 00:03:42.200 EAL: Detected lcore 30 as core 8 on socket 0 00:03:42.200 EAL: Detected lcore 31 as core 9 on socket 0 00:03:42.200 EAL: Detected lcore 32 as core 10 on socket 0 00:03:42.200 EAL: Detected lcore 33 as core 11 on socket 0 00:03:42.200 EAL: Detected lcore 34 as core 12 on socket 0 00:03:42.200 EAL: Detected lcore 35 as core 13 on socket 0 00:03:42.200 EAL: Detected lcore 36 as core 0 on socket 1 00:03:42.200 EAL: Detected lcore 37 as core 1 on socket 1 00:03:42.200 EAL: Detected lcore 38 as core 2 on socket 1 00:03:42.200 EAL: Detected lcore 39 as core 3 on socket 1 00:03:42.200 EAL: Detected lcore 40 as core 4 on socket 1 00:03:42.200 EAL: Detected lcore 41 as core 5 on socket 1 00:03:42.200 EAL: Detected 
lcore 42 as core 8 on socket 1 00:03:42.200 EAL: Detected lcore 43 as core 9 on socket 1 00:03:42.200 EAL: Detected lcore 44 as core 10 on socket 1 00:03:42.200 EAL: Detected lcore 45 as core 11 on socket 1 00:03:42.200 EAL: Detected lcore 46 as core 12 on socket 1 00:03:42.200 EAL: Detected lcore 47 as core 13 on socket 1 00:03:42.200 EAL: Maximum logical cores by configuration: 128 00:03:42.200 EAL: Detected CPU lcores: 48 00:03:42.200 EAL: Detected NUMA nodes: 2 00:03:42.200 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:42.200 EAL: Detected shared linkage of DPDK 00:03:42.200 EAL: No shared files mode enabled, IPC will be disabled 00:03:42.200 EAL: Bus pci wants IOVA as 'DC' 00:03:42.200 EAL: Buses did not request a specific IOVA mode. 00:03:42.200 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:42.200 EAL: Selected IOVA mode 'VA' 00:03:42.200 EAL: No free 2048 kB hugepages reported on node 1 00:03:42.200 EAL: Probing VFIO support... 00:03:42.200 EAL: IOMMU type 1 (Type 1) is supported 00:03:42.200 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:42.200 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:42.200 EAL: VFIO support initialized 00:03:42.200 EAL: Ask a virtual area of 0x2e000 bytes 00:03:42.200 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:42.200 EAL: Setting up physically contiguous memory... 00:03:42.200 EAL: Setting maximum number of open files to 524288 00:03:42.200 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:42.200 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:42.200 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:42.200 EAL: Ask a virtual area of 0x61000 bytes 00:03:42.200 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:42.200 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:42.200 EAL: Ask a virtual area of 0x400000000 bytes 00:03:42.200 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:42.200 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:42.200 EAL: Ask a virtual area of 0x61000 bytes 00:03:42.200 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:42.200 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:42.200 EAL: Ask a virtual area of 0x400000000 bytes 00:03:42.200 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:42.200 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:42.200 EAL: Ask a virtual area of 0x61000 bytes 00:03:42.200 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:42.200 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:42.200 EAL: Ask a virtual area of 0x400000000 bytes 00:03:42.200 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:42.200 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:42.200 EAL: Ask a virtual area of 0x61000 bytes 00:03:42.200 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:42.200 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:42.200 EAL: Ask a virtual area of 0x400000000 bytes 00:03:42.200 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:42.200 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:42.200 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:42.200 EAL: Ask a virtual area of 0x61000 bytes 00:03:42.200 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:42.200 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:03:42.200 EAL: Ask a virtual area of 0x400000000 bytes 00:03:42.200 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:42.200 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:42.200 EAL: Ask a virtual area of 0x61000 bytes 00:03:42.200 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:42.200 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:42.200 EAL: Ask a virtual area of 0x400000000 bytes 00:03:42.200 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:42.200 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:42.200 EAL: Ask a virtual area of 0x61000 bytes 00:03:42.200 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:42.200 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:42.200 EAL: Ask a virtual area of 0x400000000 bytes 00:03:42.200 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:42.200 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:42.200 EAL: Ask a virtual area of 0x61000 bytes 00:03:42.200 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:42.200 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:42.200 EAL: Ask a virtual area of 0x400000000 bytes 00:03:42.200 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:42.200 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:42.200 EAL: Hugepages will be freed exactly as allocated. 00:03:42.200 EAL: No shared files mode enabled, IPC is disabled 00:03:42.200 EAL: No shared files mode enabled, IPC is disabled 00:03:42.200 EAL: TSC frequency is ~2700000 KHz 00:03:42.200 EAL: Main lcore 0 is ready (tid=7f808b78aa00;cpuset=[0]) 00:03:42.200 EAL: Trying to obtain current memory policy. 00:03:42.200 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:42.200 EAL: Restoring previous memory policy: 0 00:03:42.200 EAL: request: mp_malloc_sync 00:03:42.200 EAL: No shared files mode enabled, IPC is disabled 00:03:42.200 EAL: Heap on socket 0 was expanded by 2MB 00:03:42.200 EAL: No shared files mode enabled, IPC is disabled 00:03:42.458 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:42.458 EAL: Mem event callback 'spdk:(nil)' registered 00:03:42.458 00:03:42.458 00:03:42.458 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.458 http://cunit.sourceforge.net/ 00:03:42.458 00:03:42.458 00:03:42.458 Suite: components_suite 00:03:42.458 Test: vtophys_malloc_test ...passed 00:03:42.458 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:42.458 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:42.458 EAL: Restoring previous memory policy: 4 00:03:42.458 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.458 EAL: request: mp_malloc_sync 00:03:42.458 EAL: No shared files mode enabled, IPC is disabled 00:03:42.458 EAL: Heap on socket 0 was expanded by 4MB 00:03:42.458 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.458 EAL: request: mp_malloc_sync 00:03:42.458 EAL: No shared files mode enabled, IPC is disabled 00:03:42.458 EAL: Heap on socket 0 was shrunk by 4MB 00:03:42.458 EAL: Trying to obtain current memory policy. 
00:03:42.458 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:42.458 EAL: Restoring previous memory policy: 4 00:03:42.458 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.458 EAL: request: mp_malloc_sync 00:03:42.458 EAL: No shared files mode enabled, IPC is disabled 00:03:42.458 EAL: Heap on socket 0 was expanded by 6MB 00:03:42.458 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.458 EAL: request: mp_malloc_sync 00:03:42.458 EAL: No shared files mode enabled, IPC is disabled 00:03:42.458 EAL: Heap on socket 0 was shrunk by 6MB 00:03:42.458 EAL: Trying to obtain current memory policy. 00:03:42.458 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:42.458 EAL: Restoring previous memory policy: 4 00:03:42.458 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.458 EAL: request: mp_malloc_sync 00:03:42.458 EAL: No shared files mode enabled, IPC is disabled 00:03:42.458 EAL: Heap on socket 0 was expanded by 10MB 00:03:42.458 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.458 EAL: request: mp_malloc_sync 00:03:42.458 EAL: No shared files mode enabled, IPC is disabled 00:03:42.458 EAL: Heap on socket 0 was shrunk by 10MB 00:03:42.458 EAL: Trying to obtain current memory policy. 00:03:42.458 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:42.458 EAL: Restoring previous memory policy: 4 00:03:42.458 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.458 EAL: request: mp_malloc_sync 00:03:42.458 EAL: No shared files mode enabled, IPC is disabled 00:03:42.458 EAL: Heap on socket 0 was expanded by 18MB 00:03:42.458 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.458 EAL: request: mp_malloc_sync 00:03:42.458 EAL: No shared files mode enabled, IPC is disabled 00:03:42.458 EAL: Heap on socket 0 was shrunk by 18MB 00:03:42.458 EAL: Trying to obtain current memory policy. 00:03:42.458 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:42.458 EAL: Restoring previous memory policy: 4 00:03:42.458 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.458 EAL: request: mp_malloc_sync 00:03:42.458 EAL: No shared files mode enabled, IPC is disabled 00:03:42.458 EAL: Heap on socket 0 was expanded by 34MB 00:03:42.458 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.458 EAL: request: mp_malloc_sync 00:03:42.458 EAL: No shared files mode enabled, IPC is disabled 00:03:42.458 EAL: Heap on socket 0 was shrunk by 34MB 00:03:42.458 EAL: Trying to obtain current memory policy. 00:03:42.458 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:42.458 EAL: Restoring previous memory policy: 4 00:03:42.458 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.458 EAL: request: mp_malloc_sync 00:03:42.458 EAL: No shared files mode enabled, IPC is disabled 00:03:42.458 EAL: Heap on socket 0 was expanded by 66MB 00:03:42.458 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.458 EAL: request: mp_malloc_sync 00:03:42.458 EAL: No shared files mode enabled, IPC is disabled 00:03:42.458 EAL: Heap on socket 0 was shrunk by 66MB 00:03:42.458 EAL: Trying to obtain current memory policy. 
00:03:42.458 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:42.458 EAL: Restoring previous memory policy: 4 00:03:42.458 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.458 EAL: request: mp_malloc_sync 00:03:42.458 EAL: No shared files mode enabled, IPC is disabled 00:03:42.458 EAL: Heap on socket 0 was expanded by 130MB 00:03:42.458 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.458 EAL: request: mp_malloc_sync 00:03:42.458 EAL: No shared files mode enabled, IPC is disabled 00:03:42.458 EAL: Heap on socket 0 was shrunk by 130MB 00:03:42.458 EAL: Trying to obtain current memory policy. 00:03:42.458 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:42.458 EAL: Restoring previous memory policy: 4 00:03:42.458 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.458 EAL: request: mp_malloc_sync 00:03:42.458 EAL: No shared files mode enabled, IPC is disabled 00:03:42.458 EAL: Heap on socket 0 was expanded by 258MB 00:03:42.716 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.716 EAL: request: mp_malloc_sync 00:03:42.716 EAL: No shared files mode enabled, IPC is disabled 00:03:42.716 EAL: Heap on socket 0 was shrunk by 258MB 00:03:42.716 EAL: Trying to obtain current memory policy. 00:03:42.716 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:42.716 EAL: Restoring previous memory policy: 4 00:03:42.716 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.716 EAL: request: mp_malloc_sync 00:03:42.716 EAL: No shared files mode enabled, IPC is disabled 00:03:42.716 EAL: Heap on socket 0 was expanded by 514MB 00:03:42.973 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.973 EAL: request: mp_malloc_sync 00:03:42.973 EAL: No shared files mode enabled, IPC is disabled 00:03:42.973 EAL: Heap on socket 0 was shrunk by 514MB 00:03:42.973 EAL: Trying to obtain current memory policy. 
00:03:42.973 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.231 EAL: Restoring previous memory policy: 4 00:03:43.231 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.231 EAL: request: mp_malloc_sync 00:03:43.232 EAL: No shared files mode enabled, IPC is disabled 00:03:43.232 EAL: Heap on socket 0 was expanded by 1026MB 00:03:43.489 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.747 EAL: request: mp_malloc_sync 00:03:43.747 EAL: No shared files mode enabled, IPC is disabled 00:03:43.747 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:43.747 passed 00:03:43.747 00:03:43.747 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.747 suites 1 1 n/a 0 0 00:03:43.747 tests 2 2 2 0 0 00:03:43.747 asserts 497 497 497 0 n/a 00:03:43.747 00:03:43.747 Elapsed time = 1.372 seconds 00:03:43.747 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.747 EAL: request: mp_malloc_sync 00:03:43.747 EAL: No shared files mode enabled, IPC is disabled 00:03:43.747 EAL: Heap on socket 0 was shrunk by 2MB 00:03:43.747 EAL: No shared files mode enabled, IPC is disabled 00:03:43.747 EAL: No shared files mode enabled, IPC is disabled 00:03:43.747 EAL: No shared files mode enabled, IPC is disabled 00:03:43.747 00:03:43.747 real 0m1.494s 00:03:43.747 user 0m0.856s 00:03:43.747 sys 0m0.604s 00:03:43.747 23:06:58 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:43.747 23:06:58 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:43.747 ************************************ 00:03:43.747 END TEST env_vtophys 00:03:43.747 ************************************ 00:03:43.747 23:06:58 env -- common/autotest_common.sh@1142 -- # return 0 00:03:43.747 23:06:58 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:43.747 23:06:58 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:43.747 23:06:58 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:43.747 23:06:58 env -- common/autotest_common.sh@10 -- # set +x 00:03:43.747 ************************************ 00:03:43.747 START TEST env_pci 00:03:43.747 ************************************ 00:03:43.747 23:06:58 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:43.747 00:03:43.747 00:03:43.747 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.747 http://cunit.sourceforge.net/ 00:03:43.747 00:03:43.747 00:03:43.747 Suite: pci 00:03:43.747 Test: pci_hook ...[2024-07-15 23:06:59.009422] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2211354 has claimed it 00:03:43.747 EAL: Cannot find device (10000:00:01.0) 00:03:43.747 EAL: Failed to attach device on primary process 00:03:43.747 passed 00:03:43.747 00:03:43.747 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.747 suites 1 1 n/a 0 0 00:03:43.747 tests 1 1 1 0 0 00:03:43.747 asserts 25 25 25 0 n/a 00:03:43.747 00:03:43.747 Elapsed time = 0.021 seconds 00:03:43.747 00:03:43.747 real 0m0.033s 00:03:43.747 user 0m0.010s 00:03:43.747 sys 0m0.023s 00:03:43.747 23:06:59 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:43.747 23:06:59 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:43.747 ************************************ 00:03:43.747 END TEST env_pci 00:03:43.747 ************************************ 
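Note: the heap expand/shrink cycles in the vtophys suite above draw on the 2048kB hugepages that setup.sh status reported per NUMA node earlier in this log. A minimal sketch for checking those per-node pools by hand, assuming the standard sysfs layout (this is not part of the test scripts themselves):

for node in /sys/devices/system/node/node*; do
  # each NUMA node exposes its 2048kB hugepage pool size under hugepages/
  echo "$node: $(cat "$node"/hugepages/hugepages-2048kB/nr_hugepages) x 2048kB"
done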
00:03:43.747 23:06:59 env -- common/autotest_common.sh@1142 -- # return 0 00:03:43.747 23:06:59 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:43.747 23:06:59 env -- env/env.sh@15 -- # uname 00:03:43.747 23:06:59 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:43.747 23:06:59 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:43.747 23:06:59 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:43.747 23:06:59 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:03:43.747 23:06:59 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:43.747 23:06:59 env -- common/autotest_common.sh@10 -- # set +x 00:03:44.005 ************************************ 00:03:44.005 START TEST env_dpdk_post_init 00:03:44.005 ************************************ 00:03:44.005 23:06:59 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:44.005 EAL: Detected CPU lcores: 48 00:03:44.005 EAL: Detected NUMA nodes: 2 00:03:44.005 EAL: Detected shared linkage of DPDK 00:03:44.005 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:44.005 EAL: Selected IOVA mode 'VA' 00:03:44.005 EAL: No free 2048 kB hugepages reported on node 1 00:03:44.005 EAL: VFIO support initialized 00:03:44.005 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:44.005 EAL: Using IOMMU type 1 (Type 1) 00:03:44.005 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:03:44.005 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:03:44.005 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:03:44.005 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:03:44.005 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:03:44.005 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:03:44.005 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:03:44.005 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:03:44.005 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:03:44.005 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:03:44.005 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:03:44.262 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:03:44.262 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:03:44.262 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:03:44.262 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:03:44.262 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:03:44.828 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:82:00.0 (socket 1) 00:03:48.106 EAL: Releasing PCI mapped resource for 0000:82:00.0 00:03:48.106 EAL: Calling pci_unmap_resource for 0000:82:00.0 at 0x202001040000 00:03:48.363 Starting DPDK initialization... 00:03:48.363 Starting SPDK post initialization... 00:03:48.363 SPDK NVMe probe 00:03:48.363 Attaching to 0000:82:00.0 00:03:48.363 Attached to 0000:82:00.0 00:03:48.363 Cleaning up... 
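Note: the probe sequence above only succeeds because scripts/setup.sh had already rebound the I/OAT channels and the NVMe controller at 0000:82:00.0 to vfio-pci; the framework later hands them back with setup.sh reset, as seen elsewhere in this log. A rough sketch of that bind/status/reset cycle using the same script paths shown in the trace (run as root; the exact device table depends on the host):

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sudo ./scripts/setup.sh          # bind supported NVMe/I/OAT devices to vfio-pci and reserve hugepages
sudo ./scripts/setup.sh status   # print the hugepage totals and the per-device driver table
sudo ./scripts/setup.sh reset    # return the devices to their kernel drivers (nvme, ioatdma)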
00:03:48.363 00:03:48.363 real 0m4.403s 00:03:48.363 user 0m3.264s 00:03:48.363 sys 0m0.194s 00:03:48.363 23:07:03 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:48.363 23:07:03 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:48.363 ************************************ 00:03:48.363 END TEST env_dpdk_post_init 00:03:48.363 ************************************ 00:03:48.363 23:07:03 env -- common/autotest_common.sh@1142 -- # return 0 00:03:48.363 23:07:03 env -- env/env.sh@26 -- # uname 00:03:48.363 23:07:03 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:48.363 23:07:03 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:48.363 23:07:03 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:48.363 23:07:03 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:48.363 23:07:03 env -- common/autotest_common.sh@10 -- # set +x 00:03:48.363 ************************************ 00:03:48.363 START TEST env_mem_callbacks 00:03:48.363 ************************************ 00:03:48.363 23:07:03 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:48.363 EAL: Detected CPU lcores: 48 00:03:48.363 EAL: Detected NUMA nodes: 2 00:03:48.363 EAL: Detected shared linkage of DPDK 00:03:48.363 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:48.363 EAL: Selected IOVA mode 'VA' 00:03:48.363 EAL: No free 2048 kB hugepages reported on node 1 00:03:48.363 EAL: VFIO support initialized 00:03:48.363 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:48.363 00:03:48.363 00:03:48.363 CUnit - A unit testing framework for C - Version 2.1-3 00:03:48.363 http://cunit.sourceforge.net/ 00:03:48.363 00:03:48.363 00:03:48.363 Suite: memory 00:03:48.363 Test: test ... 
00:03:48.363 register 0x200000200000 2097152 00:03:48.363 malloc 3145728 00:03:48.363 register 0x200000400000 4194304 00:03:48.363 buf 0x200000500000 len 3145728 PASSED 00:03:48.363 malloc 64 00:03:48.363 buf 0x2000004fff40 len 64 PASSED 00:03:48.363 malloc 4194304 00:03:48.363 register 0x200000800000 6291456 00:03:48.363 buf 0x200000a00000 len 4194304 PASSED 00:03:48.363 free 0x200000500000 3145728 00:03:48.363 free 0x2000004fff40 64 00:03:48.363 unregister 0x200000400000 4194304 PASSED 00:03:48.363 free 0x200000a00000 4194304 00:03:48.363 unregister 0x200000800000 6291456 PASSED 00:03:48.363 malloc 8388608 00:03:48.363 register 0x200000400000 10485760 00:03:48.363 buf 0x200000600000 len 8388608 PASSED 00:03:48.363 free 0x200000600000 8388608 00:03:48.363 unregister 0x200000400000 10485760 PASSED 00:03:48.363 passed 00:03:48.363 00:03:48.363 Run Summary: Type Total Ran Passed Failed Inactive 00:03:48.363 suites 1 1 n/a 0 0 00:03:48.363 tests 1 1 1 0 0 00:03:48.363 asserts 15 15 15 0 n/a 00:03:48.363 00:03:48.363 Elapsed time = 0.005 seconds 00:03:48.363 00:03:48.363 real 0m0.050s 00:03:48.363 user 0m0.018s 00:03:48.363 sys 0m0.032s 00:03:48.363 23:07:03 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:48.363 23:07:03 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:48.363 ************************************ 00:03:48.363 END TEST env_mem_callbacks 00:03:48.363 ************************************ 00:03:48.363 23:07:03 env -- common/autotest_common.sh@1142 -- # return 0 00:03:48.363 00:03:48.363 real 0m6.417s 00:03:48.363 user 0m4.418s 00:03:48.363 sys 0m1.038s 00:03:48.363 23:07:03 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:48.363 23:07:03 env -- common/autotest_common.sh@10 -- # set +x 00:03:48.363 ************************************ 00:03:48.363 END TEST env 00:03:48.363 ************************************ 00:03:48.363 23:07:03 -- common/autotest_common.sh@1142 -- # return 0 00:03:48.363 23:07:03 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:48.363 23:07:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:48.363 23:07:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:48.363 23:07:03 -- common/autotest_common.sh@10 -- # set +x 00:03:48.363 ************************************ 00:03:48.363 START TEST rpc 00:03:48.363 ************************************ 00:03:48.363 23:07:03 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:48.621 * Looking for test storage... 00:03:48.621 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:48.621 23:07:03 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2212009 00:03:48.621 23:07:03 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:48.621 23:07:03 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:48.621 23:07:03 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2212009 00:03:48.621 23:07:03 rpc -- common/autotest_common.sh@829 -- # '[' -z 2212009 ']' 00:03:48.621 23:07:03 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:48.621 23:07:03 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:48.621 23:07:03 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:03:48.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:48.621 23:07:03 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:48.621 23:07:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.621 [2024-07-15 23:07:03.750613] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:03:48.621 [2024-07-15 23:07:03.750707] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2212009 ] 00:03:48.621 EAL: No free 2048 kB hugepages reported on node 1 00:03:48.621 [2024-07-15 23:07:03.809808] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:48.621 [2024-07-15 23:07:03.918242] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:48.621 [2024-07-15 23:07:03.918294] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2212009' to capture a snapshot of events at runtime. 00:03:48.621 [2024-07-15 23:07:03.918322] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:48.621 [2024-07-15 23:07:03.918334] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:48.621 [2024-07-15 23:07:03.918343] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2212009 for offline analysis/debug. 00:03:48.621 [2024-07-15 23:07:03.918370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:48.879 23:07:04 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:48.879 23:07:04 rpc -- common/autotest_common.sh@862 -- # return 0 00:03:48.879 23:07:04 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:48.879 23:07:04 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:48.879 23:07:04 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:48.879 23:07:04 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:48.879 23:07:04 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:48.879 23:07:04 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:48.879 23:07:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:49.137 ************************************ 00:03:49.137 START TEST rpc_integrity 00:03:49.137 ************************************ 00:03:49.137 23:07:04 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:03:49.137 23:07:04 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:49.137 23:07:04 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.137 23:07:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.137 23:07:04 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.137 23:07:04 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:03:49.137 23:07:04 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:49.137 23:07:04 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:49.137 23:07:04 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:49.137 23:07:04 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.137 23:07:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.137 23:07:04 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.137 23:07:04 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:49.137 23:07:04 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:49.137 23:07:04 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.137 23:07:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.137 23:07:04 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.137 23:07:04 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:49.137 { 00:03:49.137 "name": "Malloc0", 00:03:49.137 "aliases": [ 00:03:49.137 "b13a3411-2388-48b1-a340-bbb505e1364c" 00:03:49.137 ], 00:03:49.137 "product_name": "Malloc disk", 00:03:49.137 "block_size": 512, 00:03:49.137 "num_blocks": 16384, 00:03:49.137 "uuid": "b13a3411-2388-48b1-a340-bbb505e1364c", 00:03:49.137 "assigned_rate_limits": { 00:03:49.137 "rw_ios_per_sec": 0, 00:03:49.137 "rw_mbytes_per_sec": 0, 00:03:49.137 "r_mbytes_per_sec": 0, 00:03:49.137 "w_mbytes_per_sec": 0 00:03:49.137 }, 00:03:49.137 "claimed": false, 00:03:49.137 "zoned": false, 00:03:49.137 "supported_io_types": { 00:03:49.137 "read": true, 00:03:49.137 "write": true, 00:03:49.137 "unmap": true, 00:03:49.137 "flush": true, 00:03:49.137 "reset": true, 00:03:49.137 "nvme_admin": false, 00:03:49.137 "nvme_io": false, 00:03:49.137 "nvme_io_md": false, 00:03:49.137 "write_zeroes": true, 00:03:49.137 "zcopy": true, 00:03:49.137 "get_zone_info": false, 00:03:49.137 "zone_management": false, 00:03:49.137 "zone_append": false, 00:03:49.137 "compare": false, 00:03:49.137 "compare_and_write": false, 00:03:49.137 "abort": true, 00:03:49.137 "seek_hole": false, 00:03:49.137 "seek_data": false, 00:03:49.137 "copy": true, 00:03:49.137 "nvme_iov_md": false 00:03:49.137 }, 00:03:49.137 "memory_domains": [ 00:03:49.137 { 00:03:49.137 "dma_device_id": "system", 00:03:49.137 "dma_device_type": 1 00:03:49.137 }, 00:03:49.137 { 00:03:49.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:49.137 "dma_device_type": 2 00:03:49.137 } 00:03:49.137 ], 00:03:49.137 "driver_specific": {} 00:03:49.137 } 00:03:49.137 ]' 00:03:49.137 23:07:04 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:49.137 23:07:04 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:49.137 23:07:04 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:49.137 23:07:04 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.137 23:07:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.137 [2024-07-15 23:07:04.318153] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:49.137 [2024-07-15 23:07:04.318197] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:49.137 [2024-07-15 23:07:04.318221] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1179540 00:03:49.137 [2024-07-15 23:07:04.318242] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:49.137 
[2024-07-15 23:07:04.319723] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:49.137 [2024-07-15 23:07:04.319760] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:49.137 Passthru0 00:03:49.137 23:07:04 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.137 23:07:04 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:49.137 23:07:04 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.137 23:07:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.137 23:07:04 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.137 23:07:04 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:49.137 { 00:03:49.137 "name": "Malloc0", 00:03:49.137 "aliases": [ 00:03:49.137 "b13a3411-2388-48b1-a340-bbb505e1364c" 00:03:49.137 ], 00:03:49.137 "product_name": "Malloc disk", 00:03:49.137 "block_size": 512, 00:03:49.137 "num_blocks": 16384, 00:03:49.137 "uuid": "b13a3411-2388-48b1-a340-bbb505e1364c", 00:03:49.137 "assigned_rate_limits": { 00:03:49.137 "rw_ios_per_sec": 0, 00:03:49.137 "rw_mbytes_per_sec": 0, 00:03:49.137 "r_mbytes_per_sec": 0, 00:03:49.137 "w_mbytes_per_sec": 0 00:03:49.137 }, 00:03:49.137 "claimed": true, 00:03:49.137 "claim_type": "exclusive_write", 00:03:49.137 "zoned": false, 00:03:49.137 "supported_io_types": { 00:03:49.137 "read": true, 00:03:49.137 "write": true, 00:03:49.137 "unmap": true, 00:03:49.137 "flush": true, 00:03:49.137 "reset": true, 00:03:49.138 "nvme_admin": false, 00:03:49.138 "nvme_io": false, 00:03:49.138 "nvme_io_md": false, 00:03:49.138 "write_zeroes": true, 00:03:49.138 "zcopy": true, 00:03:49.138 "get_zone_info": false, 00:03:49.138 "zone_management": false, 00:03:49.138 "zone_append": false, 00:03:49.138 "compare": false, 00:03:49.138 "compare_and_write": false, 00:03:49.138 "abort": true, 00:03:49.138 "seek_hole": false, 00:03:49.138 "seek_data": false, 00:03:49.138 "copy": true, 00:03:49.138 "nvme_iov_md": false 00:03:49.138 }, 00:03:49.138 "memory_domains": [ 00:03:49.138 { 00:03:49.138 "dma_device_id": "system", 00:03:49.138 "dma_device_type": 1 00:03:49.138 }, 00:03:49.138 { 00:03:49.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:49.138 "dma_device_type": 2 00:03:49.138 } 00:03:49.138 ], 00:03:49.138 "driver_specific": {} 00:03:49.138 }, 00:03:49.138 { 00:03:49.138 "name": "Passthru0", 00:03:49.138 "aliases": [ 00:03:49.138 "eb646125-4993-5d32-8bbb-d3c423ae3ec0" 00:03:49.138 ], 00:03:49.138 "product_name": "passthru", 00:03:49.138 "block_size": 512, 00:03:49.138 "num_blocks": 16384, 00:03:49.138 "uuid": "eb646125-4993-5d32-8bbb-d3c423ae3ec0", 00:03:49.138 "assigned_rate_limits": { 00:03:49.138 "rw_ios_per_sec": 0, 00:03:49.138 "rw_mbytes_per_sec": 0, 00:03:49.138 "r_mbytes_per_sec": 0, 00:03:49.138 "w_mbytes_per_sec": 0 00:03:49.138 }, 00:03:49.138 "claimed": false, 00:03:49.138 "zoned": false, 00:03:49.138 "supported_io_types": { 00:03:49.138 "read": true, 00:03:49.138 "write": true, 00:03:49.138 "unmap": true, 00:03:49.138 "flush": true, 00:03:49.138 "reset": true, 00:03:49.138 "nvme_admin": false, 00:03:49.138 "nvme_io": false, 00:03:49.138 "nvme_io_md": false, 00:03:49.138 "write_zeroes": true, 00:03:49.138 "zcopy": true, 00:03:49.138 "get_zone_info": false, 00:03:49.138 "zone_management": false, 00:03:49.138 "zone_append": false, 00:03:49.138 "compare": false, 00:03:49.138 "compare_and_write": false, 00:03:49.138 "abort": true, 00:03:49.138 "seek_hole": false, 
00:03:49.138 "seek_data": false, 00:03:49.138 "copy": true, 00:03:49.138 "nvme_iov_md": false 00:03:49.138 }, 00:03:49.138 "memory_domains": [ 00:03:49.138 { 00:03:49.138 "dma_device_id": "system", 00:03:49.138 "dma_device_type": 1 00:03:49.138 }, 00:03:49.138 { 00:03:49.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:49.138 "dma_device_type": 2 00:03:49.138 } 00:03:49.138 ], 00:03:49.138 "driver_specific": { 00:03:49.138 "passthru": { 00:03:49.138 "name": "Passthru0", 00:03:49.138 "base_bdev_name": "Malloc0" 00:03:49.138 } 00:03:49.138 } 00:03:49.138 } 00:03:49.138 ]' 00:03:49.138 23:07:04 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:49.138 23:07:04 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:49.138 23:07:04 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:49.138 23:07:04 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.138 23:07:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.138 23:07:04 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.138 23:07:04 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:49.138 23:07:04 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.138 23:07:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.138 23:07:04 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.138 23:07:04 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:49.138 23:07:04 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.138 23:07:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.138 23:07:04 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.138 23:07:04 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:49.138 23:07:04 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:49.138 23:07:04 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:49.138 00:03:49.138 real 0m0.229s 00:03:49.138 user 0m0.155s 00:03:49.138 sys 0m0.018s 00:03:49.138 23:07:04 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:49.138 23:07:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.138 ************************************ 00:03:49.138 END TEST rpc_integrity 00:03:49.138 ************************************ 00:03:49.395 23:07:04 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:49.395 23:07:04 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:49.395 23:07:04 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:49.395 23:07:04 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:49.395 23:07:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:49.395 ************************************ 00:03:49.395 START TEST rpc_plugins 00:03:49.395 ************************************ 00:03:49.395 23:07:04 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:03:49.395 23:07:04 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:49.395 23:07:04 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.395 23:07:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:49.395 23:07:04 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.395 23:07:04 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:49.395 23:07:04 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:03:49.395 23:07:04 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.395 23:07:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:49.395 23:07:04 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.395 23:07:04 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:49.395 { 00:03:49.395 "name": "Malloc1", 00:03:49.395 "aliases": [ 00:03:49.395 "1293a805-c773-45e9-b167-e8d297a775de" 00:03:49.395 ], 00:03:49.395 "product_name": "Malloc disk", 00:03:49.395 "block_size": 4096, 00:03:49.395 "num_blocks": 256, 00:03:49.395 "uuid": "1293a805-c773-45e9-b167-e8d297a775de", 00:03:49.395 "assigned_rate_limits": { 00:03:49.395 "rw_ios_per_sec": 0, 00:03:49.395 "rw_mbytes_per_sec": 0, 00:03:49.395 "r_mbytes_per_sec": 0, 00:03:49.395 "w_mbytes_per_sec": 0 00:03:49.395 }, 00:03:49.395 "claimed": false, 00:03:49.395 "zoned": false, 00:03:49.395 "supported_io_types": { 00:03:49.395 "read": true, 00:03:49.395 "write": true, 00:03:49.395 "unmap": true, 00:03:49.395 "flush": true, 00:03:49.395 "reset": true, 00:03:49.395 "nvme_admin": false, 00:03:49.395 "nvme_io": false, 00:03:49.395 "nvme_io_md": false, 00:03:49.395 "write_zeroes": true, 00:03:49.395 "zcopy": true, 00:03:49.395 "get_zone_info": false, 00:03:49.395 "zone_management": false, 00:03:49.395 "zone_append": false, 00:03:49.395 "compare": false, 00:03:49.395 "compare_and_write": false, 00:03:49.395 "abort": true, 00:03:49.395 "seek_hole": false, 00:03:49.395 "seek_data": false, 00:03:49.395 "copy": true, 00:03:49.395 "nvme_iov_md": false 00:03:49.395 }, 00:03:49.395 "memory_domains": [ 00:03:49.395 { 00:03:49.395 "dma_device_id": "system", 00:03:49.395 "dma_device_type": 1 00:03:49.395 }, 00:03:49.395 { 00:03:49.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:49.395 "dma_device_type": 2 00:03:49.395 } 00:03:49.395 ], 00:03:49.395 "driver_specific": {} 00:03:49.395 } 00:03:49.395 ]' 00:03:49.395 23:07:04 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:49.395 23:07:04 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:49.395 23:07:04 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:49.395 23:07:04 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.395 23:07:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:49.395 23:07:04 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.395 23:07:04 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:49.395 23:07:04 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.395 23:07:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:49.395 23:07:04 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.395 23:07:04 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:49.396 23:07:04 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:49.396 23:07:04 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:49.396 00:03:49.396 real 0m0.116s 00:03:49.396 user 0m0.077s 00:03:49.396 sys 0m0.010s 00:03:49.396 23:07:04 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:49.396 23:07:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:49.396 ************************************ 00:03:49.396 END TEST rpc_plugins 00:03:49.396 ************************************ 00:03:49.396 23:07:04 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:49.396 23:07:04 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:49.396 23:07:04 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:49.396 23:07:04 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:49.396 23:07:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:49.396 ************************************ 00:03:49.396 START TEST rpc_trace_cmd_test 00:03:49.396 ************************************ 00:03:49.396 23:07:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:03:49.396 23:07:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:49.396 23:07:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:49.396 23:07:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.396 23:07:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:49.396 23:07:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.396 23:07:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:49.396 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2212009", 00:03:49.396 "tpoint_group_mask": "0x8", 00:03:49.396 "iscsi_conn": { 00:03:49.396 "mask": "0x2", 00:03:49.396 "tpoint_mask": "0x0" 00:03:49.396 }, 00:03:49.396 "scsi": { 00:03:49.396 "mask": "0x4", 00:03:49.396 "tpoint_mask": "0x0" 00:03:49.396 }, 00:03:49.396 "bdev": { 00:03:49.396 "mask": "0x8", 00:03:49.396 "tpoint_mask": "0xffffffffffffffff" 00:03:49.396 }, 00:03:49.396 "nvmf_rdma": { 00:03:49.396 "mask": "0x10", 00:03:49.396 "tpoint_mask": "0x0" 00:03:49.396 }, 00:03:49.396 "nvmf_tcp": { 00:03:49.396 "mask": "0x20", 00:03:49.396 "tpoint_mask": "0x0" 00:03:49.396 }, 00:03:49.396 "ftl": { 00:03:49.396 "mask": "0x40", 00:03:49.396 "tpoint_mask": "0x0" 00:03:49.396 }, 00:03:49.396 "blobfs": { 00:03:49.396 "mask": "0x80", 00:03:49.396 "tpoint_mask": "0x0" 00:03:49.396 }, 00:03:49.396 "dsa": { 00:03:49.396 "mask": "0x200", 00:03:49.396 "tpoint_mask": "0x0" 00:03:49.396 }, 00:03:49.396 "thread": { 00:03:49.396 "mask": "0x400", 00:03:49.396 "tpoint_mask": "0x0" 00:03:49.396 }, 00:03:49.396 "nvme_pcie": { 00:03:49.396 "mask": "0x800", 00:03:49.396 "tpoint_mask": "0x0" 00:03:49.396 }, 00:03:49.396 "iaa": { 00:03:49.396 "mask": "0x1000", 00:03:49.396 "tpoint_mask": "0x0" 00:03:49.396 }, 00:03:49.396 "nvme_tcp": { 00:03:49.396 "mask": "0x2000", 00:03:49.396 "tpoint_mask": "0x0" 00:03:49.396 }, 00:03:49.396 "bdev_nvme": { 00:03:49.396 "mask": "0x4000", 00:03:49.396 "tpoint_mask": "0x0" 00:03:49.396 }, 00:03:49.396 "sock": { 00:03:49.396 "mask": "0x8000", 00:03:49.396 "tpoint_mask": "0x0" 00:03:49.396 } 00:03:49.396 }' 00:03:49.396 23:07:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:49.396 23:07:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:03:49.396 23:07:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:49.654 23:07:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:49.654 23:07:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:49.654 23:07:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:49.654 23:07:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:49.654 23:07:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:49.654 23:07:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:49.654 23:07:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
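rpc_trace_cmd_test above reads the tracing state back with trace_get_info and checks that the bdev tpoint group requested at startup is active. A small sketch of the same checks against a running target, assuming jq is available:

  # Query trace configuration and confirm the bdev group mask took effect.
  info=$(./scripts/rpc.py trace_get_info)
  echo "$info" | jq -r '.tpoint_group_mask'   # "0x8" corresponds to the bdev group
  echo "$info" | jq -r '.bdev.tpoint_mask'    # non-zero means bdev tracepoints are enabled
  echo "$info" | jq -r '.tpoint_shm_path'     # shm file that spdk_trace can read offline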
00:03:49.654 00:03:49.654 real 0m0.194s 00:03:49.654 user 0m0.174s 00:03:49.654 sys 0m0.014s 00:03:49.654 23:07:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:49.654 23:07:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:49.654 ************************************ 00:03:49.654 END TEST rpc_trace_cmd_test 00:03:49.654 ************************************ 00:03:49.654 23:07:04 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:49.654 23:07:04 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:49.654 23:07:04 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:49.654 23:07:04 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:49.654 23:07:04 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:49.654 23:07:04 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:49.654 23:07:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:49.654 ************************************ 00:03:49.654 START TEST rpc_daemon_integrity 00:03:49.654 ************************************ 00:03:49.654 23:07:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:03:49.654 23:07:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:49.654 23:07:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.654 23:07:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.654 23:07:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.654 23:07:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:49.654 23:07:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:49.654 23:07:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:49.654 23:07:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:49.654 23:07:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.654 23:07:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.654 23:07:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.654 23:07:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:49.654 23:07:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:49.654 23:07:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.654 23:07:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.654 23:07:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.654 23:07:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:49.654 { 00:03:49.654 "name": "Malloc2", 00:03:49.654 "aliases": [ 00:03:49.654 "bad63f1c-b814-4e52-a57b-2a7690ae944d" 00:03:49.654 ], 00:03:49.654 "product_name": "Malloc disk", 00:03:49.654 "block_size": 512, 00:03:49.654 "num_blocks": 16384, 00:03:49.654 "uuid": "bad63f1c-b814-4e52-a57b-2a7690ae944d", 00:03:49.654 "assigned_rate_limits": { 00:03:49.654 "rw_ios_per_sec": 0, 00:03:49.654 "rw_mbytes_per_sec": 0, 00:03:49.654 "r_mbytes_per_sec": 0, 00:03:49.654 "w_mbytes_per_sec": 0 00:03:49.654 }, 00:03:49.654 "claimed": false, 00:03:49.654 "zoned": false, 00:03:49.654 "supported_io_types": { 00:03:49.654 "read": true, 00:03:49.654 "write": true, 00:03:49.654 "unmap": true, 00:03:49.654 "flush": true, 00:03:49.654 "reset": true, 00:03:49.654 "nvme_admin": false, 00:03:49.654 "nvme_io": false, 
00:03:49.654 "nvme_io_md": false, 00:03:49.654 "write_zeroes": true, 00:03:49.654 "zcopy": true, 00:03:49.654 "get_zone_info": false, 00:03:49.654 "zone_management": false, 00:03:49.654 "zone_append": false, 00:03:49.654 "compare": false, 00:03:49.654 "compare_and_write": false, 00:03:49.654 "abort": true, 00:03:49.654 "seek_hole": false, 00:03:49.654 "seek_data": false, 00:03:49.654 "copy": true, 00:03:49.654 "nvme_iov_md": false 00:03:49.654 }, 00:03:49.654 "memory_domains": [ 00:03:49.654 { 00:03:49.654 "dma_device_id": "system", 00:03:49.654 "dma_device_type": 1 00:03:49.654 }, 00:03:49.654 { 00:03:49.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:49.654 "dma_device_type": 2 00:03:49.654 } 00:03:49.654 ], 00:03:49.654 "driver_specific": {} 00:03:49.654 } 00:03:49.654 ]' 00:03:49.654 23:07:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:49.911 23:07:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:49.912 23:07:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:49.912 23:07:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.912 23:07:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.912 [2024-07-15 23:07:04.988173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:49.912 [2024-07-15 23:07:04.988217] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:49.912 [2024-07-15 23:07:04.988243] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1311610 00:03:49.912 [2024-07-15 23:07:04.988258] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:49.912 [2024-07-15 23:07:04.989489] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:49.912 [2024-07-15 23:07:04.989511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:49.912 Passthru0 00:03:49.912 23:07:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.912 23:07:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:49.912 23:07:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.912 23:07:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.912 23:07:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.912 23:07:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:49.912 { 00:03:49.912 "name": "Malloc2", 00:03:49.912 "aliases": [ 00:03:49.912 "bad63f1c-b814-4e52-a57b-2a7690ae944d" 00:03:49.912 ], 00:03:49.912 "product_name": "Malloc disk", 00:03:49.912 "block_size": 512, 00:03:49.912 "num_blocks": 16384, 00:03:49.912 "uuid": "bad63f1c-b814-4e52-a57b-2a7690ae944d", 00:03:49.912 "assigned_rate_limits": { 00:03:49.912 "rw_ios_per_sec": 0, 00:03:49.912 "rw_mbytes_per_sec": 0, 00:03:49.912 "r_mbytes_per_sec": 0, 00:03:49.912 "w_mbytes_per_sec": 0 00:03:49.912 }, 00:03:49.912 "claimed": true, 00:03:49.912 "claim_type": "exclusive_write", 00:03:49.912 "zoned": false, 00:03:49.912 "supported_io_types": { 00:03:49.912 "read": true, 00:03:49.912 "write": true, 00:03:49.912 "unmap": true, 00:03:49.912 "flush": true, 00:03:49.912 "reset": true, 00:03:49.912 "nvme_admin": false, 00:03:49.912 "nvme_io": false, 00:03:49.912 "nvme_io_md": false, 00:03:49.912 "write_zeroes": true, 00:03:49.912 "zcopy": true, 00:03:49.912 "get_zone_info": 
false, 00:03:49.912 "zone_management": false, 00:03:49.912 "zone_append": false, 00:03:49.912 "compare": false, 00:03:49.912 "compare_and_write": false, 00:03:49.912 "abort": true, 00:03:49.912 "seek_hole": false, 00:03:49.912 "seek_data": false, 00:03:49.912 "copy": true, 00:03:49.912 "nvme_iov_md": false 00:03:49.912 }, 00:03:49.912 "memory_domains": [ 00:03:49.912 { 00:03:49.912 "dma_device_id": "system", 00:03:49.912 "dma_device_type": 1 00:03:49.912 }, 00:03:49.912 { 00:03:49.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:49.912 "dma_device_type": 2 00:03:49.912 } 00:03:49.912 ], 00:03:49.912 "driver_specific": {} 00:03:49.912 }, 00:03:49.912 { 00:03:49.912 "name": "Passthru0", 00:03:49.912 "aliases": [ 00:03:49.912 "38ab9113-bb7d-5adc-9e49-d3007f95f6e2" 00:03:49.912 ], 00:03:49.912 "product_name": "passthru", 00:03:49.912 "block_size": 512, 00:03:49.912 "num_blocks": 16384, 00:03:49.912 "uuid": "38ab9113-bb7d-5adc-9e49-d3007f95f6e2", 00:03:49.912 "assigned_rate_limits": { 00:03:49.912 "rw_ios_per_sec": 0, 00:03:49.912 "rw_mbytes_per_sec": 0, 00:03:49.912 "r_mbytes_per_sec": 0, 00:03:49.912 "w_mbytes_per_sec": 0 00:03:49.912 }, 00:03:49.912 "claimed": false, 00:03:49.912 "zoned": false, 00:03:49.912 "supported_io_types": { 00:03:49.912 "read": true, 00:03:49.912 "write": true, 00:03:49.912 "unmap": true, 00:03:49.912 "flush": true, 00:03:49.912 "reset": true, 00:03:49.912 "nvme_admin": false, 00:03:49.912 "nvme_io": false, 00:03:49.912 "nvme_io_md": false, 00:03:49.912 "write_zeroes": true, 00:03:49.912 "zcopy": true, 00:03:49.912 "get_zone_info": false, 00:03:49.912 "zone_management": false, 00:03:49.912 "zone_append": false, 00:03:49.912 "compare": false, 00:03:49.912 "compare_and_write": false, 00:03:49.912 "abort": true, 00:03:49.912 "seek_hole": false, 00:03:49.912 "seek_data": false, 00:03:49.912 "copy": true, 00:03:49.912 "nvme_iov_md": false 00:03:49.912 }, 00:03:49.912 "memory_domains": [ 00:03:49.912 { 00:03:49.912 "dma_device_id": "system", 00:03:49.912 "dma_device_type": 1 00:03:49.912 }, 00:03:49.912 { 00:03:49.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:49.912 "dma_device_type": 2 00:03:49.912 } 00:03:49.912 ], 00:03:49.912 "driver_specific": { 00:03:49.912 "passthru": { 00:03:49.912 "name": "Passthru0", 00:03:49.912 "base_bdev_name": "Malloc2" 00:03:49.912 } 00:03:49.912 } 00:03:49.912 } 00:03:49.912 ]' 00:03:49.912 23:07:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:49.912 23:07:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:49.912 23:07:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:49.912 23:07:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.912 23:07:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.912 23:07:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.912 23:07:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:49.912 23:07:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.912 23:07:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.912 23:07:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.912 23:07:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:49.912 23:07:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.912 23:07:05 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.912 23:07:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.912 23:07:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:49.912 23:07:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:49.912 23:07:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:49.912 00:03:49.912 real 0m0.227s 00:03:49.912 user 0m0.154s 00:03:49.912 sys 0m0.018s 00:03:49.912 23:07:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:49.912 23:07:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.912 ************************************ 00:03:49.912 END TEST rpc_daemon_integrity 00:03:49.912 ************************************ 00:03:49.912 23:07:05 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:49.912 23:07:05 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:49.912 23:07:05 rpc -- rpc/rpc.sh@84 -- # killprocess 2212009 00:03:49.912 23:07:05 rpc -- common/autotest_common.sh@948 -- # '[' -z 2212009 ']' 00:03:49.912 23:07:05 rpc -- common/autotest_common.sh@952 -- # kill -0 2212009 00:03:49.912 23:07:05 rpc -- common/autotest_common.sh@953 -- # uname 00:03:49.912 23:07:05 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:49.912 23:07:05 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2212009 00:03:49.912 23:07:05 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:49.912 23:07:05 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:49.912 23:07:05 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2212009' 00:03:49.912 killing process with pid 2212009 00:03:49.912 23:07:05 rpc -- common/autotest_common.sh@967 -- # kill 2212009 00:03:49.912 23:07:05 rpc -- common/autotest_common.sh@972 -- # wait 2212009 00:03:50.506 00:03:50.506 real 0m1.967s 00:03:50.506 user 0m2.447s 00:03:50.506 sys 0m0.582s 00:03:50.506 23:07:05 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:50.506 23:07:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:50.506 ************************************ 00:03:50.506 END TEST rpc 00:03:50.506 ************************************ 00:03:50.506 23:07:05 -- common/autotest_common.sh@1142 -- # return 0 00:03:50.506 23:07:05 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:50.506 23:07:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:50.506 23:07:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:50.506 23:07:05 -- common/autotest_common.sh@10 -- # set +x 00:03:50.506 ************************************ 00:03:50.506 START TEST skip_rpc 00:03:50.506 ************************************ 00:03:50.506 23:07:05 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:50.506 * Looking for test storage... 
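The rpc_integrity and rpc_daemon_integrity tests above both follow the same create/claim/verify/teardown cycle: create a malloc bdev, claim it with a passthru bdev, confirm both appear in bdev_get_bdevs, then delete them in reverse order and confirm the list is empty again. A condensed sketch of that cycle, assuming a target started as in the earlier snippet:

  # 8 MiB malloc bdev with 512-byte blocks; rpc.py prints the new bdev's name.
  malloc=$(./scripts/rpc.py bdev_malloc_create 8 512)
  ./scripts/rpc.py bdev_passthru_create -b "$malloc" -p Passthru0
  ./scripts/rpc.py bdev_get_bdevs | jq length      # expect 2 (malloc bdev + Passthru0)
  ./scripts/rpc.py bdev_passthru_delete Passthru0  # drop the claiming bdev first
  ./scripts/rpc.py bdev_malloc_delete "$malloc"
  ./scripts/rpc.py bdev_get_bdevs | jq length      # back to 0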
00:03:50.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:50.506 23:07:05 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:50.506 23:07:05 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:50.506 23:07:05 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:50.506 23:07:05 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:50.506 23:07:05 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:50.506 23:07:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:50.506 ************************************ 00:03:50.506 START TEST skip_rpc 00:03:50.506 ************************************ 00:03:50.506 23:07:05 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:03:50.506 23:07:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2212450 00:03:50.506 23:07:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:50.506 23:07:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:50.506 23:07:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:50.506 [2024-07-15 23:07:05.794094] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:03:50.506 [2024-07-15 23:07:05.794170] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2212450 ] 00:03:50.507 EAL: No free 2048 kB hugepages reported on node 1 00:03:50.765 [2024-07-15 23:07:05.854872] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:50.765 [2024-07-15 23:07:05.971566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:56.023 23:07:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:56.023 23:07:10 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:03:56.023 23:07:10 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:56.023 23:07:10 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:03:56.023 23:07:10 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:56.023 23:07:10 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:03:56.023 23:07:10 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:56.023 23:07:10 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:03:56.023 23:07:10 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:56.023 23:07:10 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.023 23:07:10 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:03:56.023 23:07:10 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:03:56.023 23:07:10 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:03:56.023 23:07:10 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:03:56.023 23:07:10 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:03:56.023 23:07:10 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:56.023 23:07:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2212450 00:03:56.023 23:07:10 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 2212450 ']' 00:03:56.023 23:07:10 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 2212450 00:03:56.023 23:07:10 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:03:56.023 23:07:10 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:56.023 23:07:10 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2212450 00:03:56.023 23:07:10 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:56.023 23:07:10 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:56.023 23:07:10 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2212450' 00:03:56.023 killing process with pid 2212450 00:03:56.023 23:07:10 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 2212450 00:03:56.023 23:07:10 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 2212450 00:03:56.023 00:03:56.023 real 0m5.497s 00:03:56.023 user 0m5.168s 00:03:56.023 sys 0m0.322s 00:03:56.023 23:07:11 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:56.023 23:07:11 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.023 ************************************ 00:03:56.023 END TEST skip_rpc 00:03:56.023 ************************************ 00:03:56.023 23:07:11 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:56.023 23:07:11 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:56.023 23:07:11 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:56.023 23:07:11 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:56.023 23:07:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.023 ************************************ 00:03:56.023 START TEST skip_rpc_with_json 00:03:56.023 ************************************ 00:03:56.023 23:07:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:03:56.023 23:07:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:56.023 23:07:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2213138 00:03:56.023 23:07:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:56.023 23:07:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:56.023 23:07:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2213138 00:03:56.023 23:07:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 2213138 ']' 00:03:56.023 23:07:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:56.023 23:07:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:56.023 23:07:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:56.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
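test_skip_rpc above starts the target with --no-rpc-server and then asserts (via the NOT wrapper) that an ordinary RPC fails, since no socket is ever created. A rough equivalent outside the harness, using the same fixed 5-second wait the test itself uses:

  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  tgt_pid=$!
  sleep 5   # no socket will appear, so there is nothing to poll for
  if ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; then
      echo "unexpected: RPC succeeded with --no-rpc-server" >&2
  else
      echo "expected failure: no RPC server is listening"
  fi
  kill "$tgt_pid"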
00:03:56.023 23:07:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:56.023 23:07:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:56.280 [2024-07-15 23:07:11.340484] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:03:56.280 [2024-07-15 23:07:11.340582] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2213138 ] 00:03:56.280 EAL: No free 2048 kB hugepages reported on node 1 00:03:56.280 [2024-07-15 23:07:11.398270] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:56.280 [2024-07-15 23:07:11.509863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:56.537 23:07:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:56.537 23:07:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:03:56.537 23:07:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:56.537 23:07:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:56.537 23:07:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:56.537 [2024-07-15 23:07:11.781265] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:56.537 request: 00:03:56.537 { 00:03:56.537 "trtype": "tcp", 00:03:56.537 "method": "nvmf_get_transports", 00:03:56.537 "req_id": 1 00:03:56.537 } 00:03:56.537 Got JSON-RPC error response 00:03:56.537 response: 00:03:56.537 { 00:03:56.537 "code": -19, 00:03:56.537 "message": "No such device" 00:03:56.537 } 00:03:56.537 23:07:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:03:56.537 23:07:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:56.537 23:07:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:56.537 23:07:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:56.537 [2024-07-15 23:07:11.789395] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:56.537 23:07:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:56.537 23:07:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:56.537 23:07:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:56.537 23:07:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:56.795 23:07:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:56.795 23:07:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:56.795 { 00:03:56.795 "subsystems": [ 00:03:56.795 { 00:03:56.795 "subsystem": "vfio_user_target", 00:03:56.795 "config": null 00:03:56.795 }, 00:03:56.795 { 00:03:56.795 "subsystem": "keyring", 00:03:56.795 "config": [] 00:03:56.795 }, 00:03:56.795 { 00:03:56.795 "subsystem": "iobuf", 00:03:56.795 "config": [ 00:03:56.795 { 00:03:56.795 "method": "iobuf_set_options", 00:03:56.795 "params": { 00:03:56.795 "small_pool_count": 8192, 00:03:56.795 "large_pool_count": 1024, 00:03:56.795 "small_bufsize": 8192, 00:03:56.795 "large_bufsize": 
135168 00:03:56.795 } 00:03:56.795 } 00:03:56.795 ] 00:03:56.795 }, 00:03:56.795 { 00:03:56.795 "subsystem": "sock", 00:03:56.795 "config": [ 00:03:56.795 { 00:03:56.795 "method": "sock_set_default_impl", 00:03:56.795 "params": { 00:03:56.795 "impl_name": "posix" 00:03:56.795 } 00:03:56.795 }, 00:03:56.795 { 00:03:56.795 "method": "sock_impl_set_options", 00:03:56.795 "params": { 00:03:56.795 "impl_name": "ssl", 00:03:56.795 "recv_buf_size": 4096, 00:03:56.795 "send_buf_size": 4096, 00:03:56.795 "enable_recv_pipe": true, 00:03:56.795 "enable_quickack": false, 00:03:56.795 "enable_placement_id": 0, 00:03:56.795 "enable_zerocopy_send_server": true, 00:03:56.795 "enable_zerocopy_send_client": false, 00:03:56.795 "zerocopy_threshold": 0, 00:03:56.795 "tls_version": 0, 00:03:56.795 "enable_ktls": false 00:03:56.795 } 00:03:56.795 }, 00:03:56.795 { 00:03:56.795 "method": "sock_impl_set_options", 00:03:56.795 "params": { 00:03:56.795 "impl_name": "posix", 00:03:56.795 "recv_buf_size": 2097152, 00:03:56.795 "send_buf_size": 2097152, 00:03:56.795 "enable_recv_pipe": true, 00:03:56.795 "enable_quickack": false, 00:03:56.795 "enable_placement_id": 0, 00:03:56.795 "enable_zerocopy_send_server": true, 00:03:56.795 "enable_zerocopy_send_client": false, 00:03:56.795 "zerocopy_threshold": 0, 00:03:56.795 "tls_version": 0, 00:03:56.795 "enable_ktls": false 00:03:56.795 } 00:03:56.795 } 00:03:56.795 ] 00:03:56.795 }, 00:03:56.795 { 00:03:56.795 "subsystem": "vmd", 00:03:56.795 "config": [] 00:03:56.795 }, 00:03:56.795 { 00:03:56.795 "subsystem": "accel", 00:03:56.795 "config": [ 00:03:56.795 { 00:03:56.795 "method": "accel_set_options", 00:03:56.795 "params": { 00:03:56.795 "small_cache_size": 128, 00:03:56.795 "large_cache_size": 16, 00:03:56.795 "task_count": 2048, 00:03:56.795 "sequence_count": 2048, 00:03:56.795 "buf_count": 2048 00:03:56.795 } 00:03:56.795 } 00:03:56.795 ] 00:03:56.795 }, 00:03:56.795 { 00:03:56.795 "subsystem": "bdev", 00:03:56.795 "config": [ 00:03:56.795 { 00:03:56.795 "method": "bdev_set_options", 00:03:56.795 "params": { 00:03:56.795 "bdev_io_pool_size": 65535, 00:03:56.795 "bdev_io_cache_size": 256, 00:03:56.795 "bdev_auto_examine": true, 00:03:56.795 "iobuf_small_cache_size": 128, 00:03:56.795 "iobuf_large_cache_size": 16 00:03:56.795 } 00:03:56.795 }, 00:03:56.795 { 00:03:56.795 "method": "bdev_raid_set_options", 00:03:56.795 "params": { 00:03:56.795 "process_window_size_kb": 1024 00:03:56.795 } 00:03:56.795 }, 00:03:56.795 { 00:03:56.795 "method": "bdev_iscsi_set_options", 00:03:56.795 "params": { 00:03:56.795 "timeout_sec": 30 00:03:56.795 } 00:03:56.795 }, 00:03:56.795 { 00:03:56.795 "method": "bdev_nvme_set_options", 00:03:56.795 "params": { 00:03:56.795 "action_on_timeout": "none", 00:03:56.795 "timeout_us": 0, 00:03:56.795 "timeout_admin_us": 0, 00:03:56.795 "keep_alive_timeout_ms": 10000, 00:03:56.795 "arbitration_burst": 0, 00:03:56.795 "low_priority_weight": 0, 00:03:56.795 "medium_priority_weight": 0, 00:03:56.795 "high_priority_weight": 0, 00:03:56.795 "nvme_adminq_poll_period_us": 10000, 00:03:56.795 "nvme_ioq_poll_period_us": 0, 00:03:56.795 "io_queue_requests": 0, 00:03:56.795 "delay_cmd_submit": true, 00:03:56.795 "transport_retry_count": 4, 00:03:56.795 "bdev_retry_count": 3, 00:03:56.795 "transport_ack_timeout": 0, 00:03:56.795 "ctrlr_loss_timeout_sec": 0, 00:03:56.795 "reconnect_delay_sec": 0, 00:03:56.795 "fast_io_fail_timeout_sec": 0, 00:03:56.795 "disable_auto_failback": false, 00:03:56.795 "generate_uuids": false, 00:03:56.795 "transport_tos": 0, 
00:03:56.795 "nvme_error_stat": false, 00:03:56.795 "rdma_srq_size": 0, 00:03:56.795 "io_path_stat": false, 00:03:56.795 "allow_accel_sequence": false, 00:03:56.795 "rdma_max_cq_size": 0, 00:03:56.795 "rdma_cm_event_timeout_ms": 0, 00:03:56.795 "dhchap_digests": [ 00:03:56.795 "sha256", 00:03:56.795 "sha384", 00:03:56.795 "sha512" 00:03:56.795 ], 00:03:56.795 "dhchap_dhgroups": [ 00:03:56.795 "null", 00:03:56.795 "ffdhe2048", 00:03:56.795 "ffdhe3072", 00:03:56.795 "ffdhe4096", 00:03:56.795 "ffdhe6144", 00:03:56.795 "ffdhe8192" 00:03:56.795 ] 00:03:56.795 } 00:03:56.795 }, 00:03:56.795 { 00:03:56.795 "method": "bdev_nvme_set_hotplug", 00:03:56.795 "params": { 00:03:56.795 "period_us": 100000, 00:03:56.795 "enable": false 00:03:56.795 } 00:03:56.795 }, 00:03:56.795 { 00:03:56.795 "method": "bdev_wait_for_examine" 00:03:56.795 } 00:03:56.795 ] 00:03:56.795 }, 00:03:56.795 { 00:03:56.795 "subsystem": "scsi", 00:03:56.795 "config": null 00:03:56.795 }, 00:03:56.795 { 00:03:56.795 "subsystem": "scheduler", 00:03:56.795 "config": [ 00:03:56.795 { 00:03:56.795 "method": "framework_set_scheduler", 00:03:56.795 "params": { 00:03:56.795 "name": "static" 00:03:56.795 } 00:03:56.795 } 00:03:56.795 ] 00:03:56.795 }, 00:03:56.795 { 00:03:56.795 "subsystem": "vhost_scsi", 00:03:56.795 "config": [] 00:03:56.795 }, 00:03:56.795 { 00:03:56.795 "subsystem": "vhost_blk", 00:03:56.795 "config": [] 00:03:56.795 }, 00:03:56.795 { 00:03:56.795 "subsystem": "ublk", 00:03:56.795 "config": [] 00:03:56.795 }, 00:03:56.795 { 00:03:56.795 "subsystem": "nbd", 00:03:56.795 "config": [] 00:03:56.795 }, 00:03:56.795 { 00:03:56.795 "subsystem": "nvmf", 00:03:56.795 "config": [ 00:03:56.795 { 00:03:56.795 "method": "nvmf_set_config", 00:03:56.795 "params": { 00:03:56.795 "discovery_filter": "match_any", 00:03:56.795 "admin_cmd_passthru": { 00:03:56.795 "identify_ctrlr": false 00:03:56.795 } 00:03:56.795 } 00:03:56.795 }, 00:03:56.795 { 00:03:56.795 "method": "nvmf_set_max_subsystems", 00:03:56.795 "params": { 00:03:56.795 "max_subsystems": 1024 00:03:56.795 } 00:03:56.795 }, 00:03:56.795 { 00:03:56.795 "method": "nvmf_set_crdt", 00:03:56.795 "params": { 00:03:56.795 "crdt1": 0, 00:03:56.795 "crdt2": 0, 00:03:56.795 "crdt3": 0 00:03:56.795 } 00:03:56.795 }, 00:03:56.795 { 00:03:56.795 "method": "nvmf_create_transport", 00:03:56.795 "params": { 00:03:56.795 "trtype": "TCP", 00:03:56.795 "max_queue_depth": 128, 00:03:56.795 "max_io_qpairs_per_ctrlr": 127, 00:03:56.795 "in_capsule_data_size": 4096, 00:03:56.795 "max_io_size": 131072, 00:03:56.795 "io_unit_size": 131072, 00:03:56.795 "max_aq_depth": 128, 00:03:56.795 "num_shared_buffers": 511, 00:03:56.795 "buf_cache_size": 4294967295, 00:03:56.795 "dif_insert_or_strip": false, 00:03:56.795 "zcopy": false, 00:03:56.795 "c2h_success": true, 00:03:56.795 "sock_priority": 0, 00:03:56.795 "abort_timeout_sec": 1, 00:03:56.795 "ack_timeout": 0, 00:03:56.796 "data_wr_pool_size": 0 00:03:56.796 } 00:03:56.796 } 00:03:56.796 ] 00:03:56.796 }, 00:03:56.796 { 00:03:56.796 "subsystem": "iscsi", 00:03:56.796 "config": [ 00:03:56.796 { 00:03:56.796 "method": "iscsi_set_options", 00:03:56.796 "params": { 00:03:56.796 "node_base": "iqn.2016-06.io.spdk", 00:03:56.796 "max_sessions": 128, 00:03:56.796 "max_connections_per_session": 2, 00:03:56.796 "max_queue_depth": 64, 00:03:56.796 "default_time2wait": 2, 00:03:56.796 "default_time2retain": 20, 00:03:56.796 "first_burst_length": 8192, 00:03:56.796 "immediate_data": true, 00:03:56.796 "allow_duplicated_isid": false, 00:03:56.796 
"error_recovery_level": 0, 00:03:56.796 "nop_timeout": 60, 00:03:56.796 "nop_in_interval": 30, 00:03:56.796 "disable_chap": false, 00:03:56.796 "require_chap": false, 00:03:56.796 "mutual_chap": false, 00:03:56.796 "chap_group": 0, 00:03:56.796 "max_large_datain_per_connection": 64, 00:03:56.796 "max_r2t_per_connection": 4, 00:03:56.796 "pdu_pool_size": 36864, 00:03:56.796 "immediate_data_pool_size": 16384, 00:03:56.796 "data_out_pool_size": 2048 00:03:56.796 } 00:03:56.796 } 00:03:56.796 ] 00:03:56.796 } 00:03:56.796 ] 00:03:56.796 } 00:03:56.796 23:07:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:56.796 23:07:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2213138 00:03:56.796 23:07:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2213138 ']' 00:03:56.796 23:07:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2213138 00:03:56.796 23:07:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:03:56.796 23:07:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:56.796 23:07:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2213138 00:03:56.796 23:07:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:56.796 23:07:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:56.796 23:07:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2213138' 00:03:56.796 killing process with pid 2213138 00:03:56.796 23:07:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2213138 00:03:56.796 23:07:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2213138 00:03:57.360 23:07:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2213278 00:03:57.360 23:07:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:57.360 23:07:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:02.618 23:07:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2213278 00:04:02.618 23:07:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2213278 ']' 00:04:02.618 23:07:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2213278 00:04:02.618 23:07:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:02.618 23:07:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:02.618 23:07:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2213278 00:04:02.618 23:07:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:02.618 23:07:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:02.618 23:07:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2213278' 00:04:02.618 killing process with pid 2213278 00:04:02.618 23:07:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2213278 00:04:02.618 23:07:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2213278 
00:04:02.618 23:07:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:02.618 23:07:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:02.618 00:04:02.618 real 0m6.641s 00:04:02.618 user 0m6.219s 00:04:02.618 sys 0m0.693s 00:04:02.618 23:07:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:02.618 23:07:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:02.618 ************************************ 00:04:02.618 END TEST skip_rpc_with_json 00:04:02.618 ************************************ 00:04:02.877 23:07:17 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:02.877 23:07:17 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:02.877 23:07:17 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:02.877 23:07:17 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:02.877 23:07:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.877 ************************************ 00:04:02.877 START TEST skip_rpc_with_delay 00:04:02.877 ************************************ 00:04:02.877 23:07:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:02.877 23:07:17 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:02.877 23:07:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:02.877 23:07:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:02.877 23:07:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:02.877 23:07:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:02.877 23:07:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:02.877 23:07:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:02.877 23:07:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:02.877 23:07:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:02.877 23:07:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:02.877 23:07:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:02.877 23:07:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:02.877 [2024-07-15 23:07:18.024965] app.c: 837:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
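The error above is the whole point of test_skip_rpc_with_delay: --wait-for-rpc defers subsystem initialization until an RPC arrives, so combining it with --no-rpc-server is contradictory and spdk_tgt must refuse to start. A sketch of the check:

  # Expected to fail: there is no RPC server to ever deliver the deferred init RPC.
  if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "unexpected: target started with contradictory options" >&2
  else
      echo "expected failure: --wait-for-rpc rejected together with --no-rpc-server"
  fi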
00:04:02.877 [2024-07-15 23:07:18.025092] app.c: 716:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:02.877 23:07:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:02.877 23:07:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:02.877 23:07:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:02.877 23:07:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:02.877 00:04:02.877 real 0m0.067s 00:04:02.877 user 0m0.043s 00:04:02.877 sys 0m0.023s 00:04:02.877 23:07:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:02.877 23:07:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:02.877 ************************************ 00:04:02.877 END TEST skip_rpc_with_delay 00:04:02.877 ************************************ 00:04:02.877 23:07:18 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:02.877 23:07:18 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:02.877 23:07:18 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:02.877 23:07:18 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:02.877 23:07:18 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:02.877 23:07:18 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:02.877 23:07:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.877 ************************************ 00:04:02.877 START TEST exit_on_failed_rpc_init 00:04:02.877 ************************************ 00:04:02.877 23:07:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:02.877 23:07:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2213996 00:04:02.877 23:07:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:02.877 23:07:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2213996 00:04:02.877 23:07:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 2213996 ']' 00:04:02.877 23:07:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:02.877 23:07:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:02.877 23:07:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:02.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:02.877 23:07:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:02.877 23:07:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:02.877 [2024-07-15 23:07:18.135210] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
00:04:02.877 [2024-07-15 23:07:18.135310] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2213996 ] 00:04:02.877 EAL: No free 2048 kB hugepages reported on node 1 00:04:03.135 [2024-07-15 23:07:18.193250] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:03.135 [2024-07-15 23:07:18.304608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.393 23:07:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:03.393 23:07:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:03.393 23:07:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:03.393 23:07:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:03.393 23:07:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:03.393 23:07:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:03.393 23:07:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.393 23:07:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:03.393 23:07:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.393 23:07:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:03.393 23:07:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.393 23:07:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:03.393 23:07:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.393 23:07:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:03.393 23:07:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:03.393 [2024-07-15 23:07:18.630297] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
00:04:03.393 [2024-07-15 23:07:18.630395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2214122 ] 00:04:03.393 EAL: No free 2048 kB hugepages reported on node 1 00:04:03.393 [2024-07-15 23:07:18.691919] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:03.651 [2024-07-15 23:07:18.812865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:03.651 [2024-07-15 23:07:18.812984] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:03.651 [2024-07-15 23:07:18.813002] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:03.651 [2024-07-15 23:07:18.813013] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:03.651 23:07:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:03.651 23:07:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:03.651 23:07:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:03.651 23:07:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:03.651 23:07:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:03.651 23:07:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:03.651 23:07:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:03.651 23:07:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2213996 00:04:03.651 23:07:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 2213996 ']' 00:04:03.651 23:07:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 2213996 00:04:03.651 23:07:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:03.651 23:07:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:03.651 23:07:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2213996 00:04:03.909 23:07:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:03.909 23:07:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:03.909 23:07:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2213996' 00:04:03.909 killing process with pid 2213996 00:04:03.909 23:07:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 2213996 00:04:03.909 23:07:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 2213996 00:04:04.166 00:04:04.166 real 0m1.348s 00:04:04.166 user 0m1.501s 00:04:04.166 sys 0m0.460s 00:04:04.166 23:07:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:04.166 23:07:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:04.166 ************************************ 00:04:04.166 END TEST exit_on_failed_rpc_init 00:04:04.166 ************************************ 00:04:04.166 23:07:19 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:04.166 23:07:19 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:04.166 00:04:04.166 real 0m13.790s 00:04:04.166 user 0m13.017s 00:04:04.166 sys 0m1.665s 00:04:04.166 23:07:19 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:04.166 23:07:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.166 ************************************ 00:04:04.166 END TEST skip_rpc 00:04:04.166 ************************************ 00:04:04.166 23:07:19 -- common/autotest_common.sh@1142 -- # return 0 00:04:04.166 23:07:19 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:04.166 23:07:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:04.166 23:07:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.166 23:07:19 -- common/autotest_common.sh@10 -- # set +x 00:04:04.424 ************************************ 00:04:04.424 START TEST rpc_client 00:04:04.424 ************************************ 00:04:04.424 23:07:19 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:04.424 * Looking for test storage... 00:04:04.424 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:04.424 23:07:19 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:04.424 OK 00:04:04.424 23:07:19 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:04.424 00:04:04.424 real 0m0.068s 00:04:04.424 user 0m0.024s 00:04:04.424 sys 0m0.049s 00:04:04.424 23:07:19 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:04.424 23:07:19 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:04.424 ************************************ 00:04:04.424 END TEST rpc_client 00:04:04.424 ************************************ 00:04:04.424 23:07:19 -- common/autotest_common.sh@1142 -- # return 0 00:04:04.424 23:07:19 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:04.424 23:07:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:04.424 23:07:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.424 23:07:19 -- common/autotest_common.sh@10 -- # set +x 00:04:04.424 ************************************ 00:04:04.424 START TEST json_config 00:04:04.424 ************************************ 00:04:04.424 23:07:19 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:04.424 23:07:19 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:04.424 23:07:19 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:04.424 23:07:19 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:04.424 23:07:19 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:04.424 23:07:19 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:04.424 23:07:19 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:04.424 23:07:19 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:04.424 23:07:19 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:04.424 23:07:19 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:04.424 
23:07:19 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:04.424 23:07:19 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:04.424 23:07:19 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:04.424 23:07:19 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:04:04.424 23:07:19 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:04:04.424 23:07:19 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:04.424 23:07:19 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:04.424 23:07:19 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:04.424 23:07:19 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:04.424 23:07:19 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:04.424 23:07:19 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:04.424 23:07:19 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:04.424 23:07:19 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:04.424 23:07:19 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.424 23:07:19 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.424 23:07:19 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.424 23:07:19 json_config -- paths/export.sh@5 -- # export PATH 00:04:04.424 23:07:19 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.424 23:07:19 json_config -- nvmf/common.sh@47 -- # : 0 00:04:04.424 23:07:19 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:04.424 23:07:19 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:04.424 23:07:19 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:04.424 23:07:19 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:04.424 23:07:19 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:04.424 23:07:19 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:04.424 23:07:19 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:04.424 23:07:19 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:04.424 23:07:19 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:04.424 23:07:19 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:04.424 23:07:19 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:04.424 23:07:19 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:04.424 23:07:19 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:04.424 23:07:19 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:04.424 23:07:19 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:04.424 23:07:19 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:04.424 23:07:19 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:04.424 23:07:19 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:04.424 23:07:19 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:04.424 23:07:19 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:04.424 23:07:19 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:04.424 23:07:19 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:04.424 23:07:19 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:04.424 23:07:19 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:04.424 INFO: JSON configuration test init 00:04:04.424 23:07:19 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:04.424 23:07:19 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:04.424 23:07:19 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:04.424 23:07:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.424 23:07:19 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:04.425 23:07:19 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:04.425 23:07:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.425 23:07:19 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:04.425 23:07:19 json_config -- json_config/common.sh@9 -- # local app=target 00:04:04.425 23:07:19 json_config -- json_config/common.sh@10 -- # shift 00:04:04.425 23:07:19 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:04.425 23:07:19 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:04.425 23:07:19 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:04.425 23:07:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:04.425 23:07:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:04.425 23:07:19 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2214372 00:04:04.425 23:07:19 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:04.425 23:07:19 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:04.425 Waiting for target to run... 00:04:04.425 23:07:19 json_config -- json_config/common.sh@25 -- # waitforlisten 2214372 /var/tmp/spdk_tgt.sock 00:04:04.425 23:07:19 json_config -- common/autotest_common.sh@829 -- # '[' -z 2214372 ']' 00:04:04.425 23:07:19 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:04.425 23:07:19 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:04.425 23:07:19 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:04.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:04.425 23:07:19 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:04.425 23:07:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.425 [2024-07-15 23:07:19.728045] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:04:04.425 [2024-07-15 23:07:19.728134] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2214372 ] 00:04:04.682 EAL: No free 2048 kB hugepages reported on node 1 00:04:04.938 [2024-07-15 23:07:20.241938] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.195 [2024-07-15 23:07:20.346149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:05.452 23:07:20 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:05.452 23:07:20 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:05.452 23:07:20 json_config -- json_config/common.sh@26 -- # echo '' 00:04:05.452 00:04:05.452 23:07:20 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:05.452 23:07:20 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:05.452 23:07:20 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:05.452 23:07:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:05.452 23:07:20 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:05.452 23:07:20 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:05.452 23:07:20 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:05.452 23:07:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:05.452 23:07:20 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:05.452 23:07:20 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:05.452 23:07:20 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:08.723 23:07:23 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:08.723 23:07:23 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:08.723 23:07:23 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:08.723 23:07:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:08.723 23:07:23 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:08.723 23:07:23 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:08.723 23:07:23 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:08.723 23:07:23 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:08.723 23:07:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:08.723 23:07:23 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:08.980 23:07:24 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:08.980 23:07:24 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:08.980 23:07:24 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:08.980 23:07:24 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:08.980 23:07:24 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:08.980 23:07:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:08.980 23:07:24 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:08.980 23:07:24 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:08.980 23:07:24 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:08.980 23:07:24 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:08.980 23:07:24 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:08.980 23:07:24 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:08.980 23:07:24 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:08.980 23:07:24 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:08.980 23:07:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:08.980 23:07:24 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:08.980 23:07:24 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:08.980 23:07:24 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:08.980 23:07:24 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:08.980 23:07:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:09.237 MallocForNvmf0 00:04:09.237 23:07:24 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:09.237 23:07:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:09.494 MallocForNvmf1 00:04:09.494 23:07:24 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:09.494 23:07:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:09.751 [2024-07-15 23:07:24.888217] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:09.751 23:07:24 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:09.751 23:07:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:10.009 23:07:25 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:10.009 23:07:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:10.266 23:07:25 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:10.266 23:07:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:10.524 23:07:25 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:10.524 23:07:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:10.782 [2024-07-15 23:07:25.871458] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:10.782 23:07:25 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:10.782 23:07:25 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:10.782 23:07:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.782 23:07:25 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:10.782 23:07:25 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:10.782 23:07:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.782 23:07:25 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:10.782 23:07:25 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:10.782 23:07:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:11.040 MallocBdevForConfigChangeCheck 00:04:11.040 23:07:26 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:11.040 23:07:26 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:11.040 23:07:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:11.040 23:07:26 json_config -- 
json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:11.040 23:07:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:11.298 23:07:26 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:04:11.298 INFO: shutting down applications... 00:04:11.298 23:07:26 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:11.298 23:07:26 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:11.298 23:07:26 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:11.298 23:07:26 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:13.198 Calling clear_iscsi_subsystem 00:04:13.198 Calling clear_nvmf_subsystem 00:04:13.198 Calling clear_nbd_subsystem 00:04:13.198 Calling clear_ublk_subsystem 00:04:13.198 Calling clear_vhost_blk_subsystem 00:04:13.198 Calling clear_vhost_scsi_subsystem 00:04:13.198 Calling clear_bdev_subsystem 00:04:13.198 23:07:28 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:13.198 23:07:28 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:13.198 23:07:28 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:13.198 23:07:28 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:13.198 23:07:28 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:13.198 23:07:28 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:13.455 23:07:28 json_config -- json_config/json_config.sh@345 -- # break 00:04:13.455 23:07:28 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:13.455 23:07:28 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:13.455 23:07:28 json_config -- json_config/common.sh@31 -- # local app=target 00:04:13.455 23:07:28 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:13.455 23:07:28 json_config -- json_config/common.sh@35 -- # [[ -n 2214372 ]] 00:04:13.455 23:07:28 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2214372 00:04:13.455 23:07:28 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:13.455 23:07:28 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:13.455 23:07:28 json_config -- json_config/common.sh@41 -- # kill -0 2214372 00:04:13.455 23:07:28 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:14.021 23:07:29 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:14.021 23:07:29 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:14.021 23:07:29 json_config -- json_config/common.sh@41 -- # kill -0 2214372 00:04:14.021 23:07:29 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:14.021 23:07:29 json_config -- json_config/common.sh@43 -- # break 00:04:14.021 23:07:29 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:14.021 23:07:29 json_config -- json_config/common.sh@53 -- # echo 'SPDK target 
shutdown done' 00:04:14.021 SPDK target shutdown done 00:04:14.021 23:07:29 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:04:14.021 INFO: relaunching applications... 00:04:14.021 23:07:29 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:14.021 23:07:29 json_config -- json_config/common.sh@9 -- # local app=target 00:04:14.021 23:07:29 json_config -- json_config/common.sh@10 -- # shift 00:04:14.021 23:07:29 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:14.021 23:07:29 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:14.021 23:07:29 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:14.021 23:07:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:14.021 23:07:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:14.021 23:07:29 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2215559 00:04:14.021 23:07:29 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:14.021 23:07:29 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:14.021 Waiting for target to run... 00:04:14.021 23:07:29 json_config -- json_config/common.sh@25 -- # waitforlisten 2215559 /var/tmp/spdk_tgt.sock 00:04:14.021 23:07:29 json_config -- common/autotest_common.sh@829 -- # '[' -z 2215559 ']' 00:04:14.021 23:07:29 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:14.021 23:07:29 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:14.021 23:07:29 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:14.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:14.021 23:07:29 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:14.021 23:07:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.021 [2024-07-15 23:07:29.201872] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
00:04:14.021 [2024-07-15 23:07:29.201978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2215559 ] 00:04:14.021 EAL: No free 2048 kB hugepages reported on node 1 00:04:14.280 [2024-07-15 23:07:29.568672] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.538 [2024-07-15 23:07:29.658917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.812 [2024-07-15 23:07:32.701645] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:17.812 [2024-07-15 23:07:32.734156] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:17.812 23:07:32 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:17.812 23:07:32 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:17.812 23:07:32 json_config -- json_config/common.sh@26 -- # echo '' 00:04:17.812 00:04:17.812 23:07:32 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:17.812 23:07:32 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:17.812 INFO: Checking if target configuration is the same... 00:04:17.812 23:07:32 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:17.812 23:07:32 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:17.812 23:07:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:17.812 + '[' 2 -ne 2 ']' 00:04:17.812 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:17.812 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:17.812 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:17.812 +++ basename /dev/fd/62 00:04:17.812 ++ mktemp /tmp/62.XXX 00:04:17.812 + tmp_file_1=/tmp/62.zkF 00:04:17.812 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:17.812 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:17.812 + tmp_file_2=/tmp/spdk_tgt_config.json.pzM 00:04:17.812 + ret=0 00:04:17.812 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:18.070 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:18.070 + diff -u /tmp/62.zkF /tmp/spdk_tgt_config.json.pzM 00:04:18.070 + echo 'INFO: JSON config files are the same' 00:04:18.070 INFO: JSON config files are the same 00:04:18.070 + rm /tmp/62.zkF /tmp/spdk_tgt_config.json.pzM 00:04:18.070 + exit 0 00:04:18.070 23:07:33 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:18.070 23:07:33 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:18.070 INFO: changing configuration and checking if this can be detected... 
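The step that follows verifies change detection: the sentinel bdev MallocBdevForConfigChangeCheck is deleted over RPC, the runtime configuration is saved again, and the result is diffed against spdk_tgt_config.json; this time a non-empty diff (ret=1) is the success condition. A hedged sketch of that flow, with jq -S standing in for the sort step that config_filter.py performs in the log:

# Assumed relative paths; the socket and bdev name are taken from the log.
RPC="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

$RPC bdev_malloc_delete MallocBdevForConfigChangeCheck   # change the config
$RPC save_config | jq -S . > /tmp/current.json           # runtime view
jq -S . spdk_tgt_config.json > /tmp/original.json        # on-disk baseline

# The configs must now differ; an empty diff would mean detection failed.
if diff -u /tmp/original.json /tmp/current.json > /dev/null; then
    echo "ERROR: configuration change was not detected" >&2
    exit 1
fi
echo "INFO: configuration change detected."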
00:04:18.070 23:07:33 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:18.070 23:07:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:18.327 23:07:33 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:18.327 23:07:33 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:18.327 23:07:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:18.327 + '[' 2 -ne 2 ']' 00:04:18.327 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:18.327 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:18.327 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:18.327 +++ basename /dev/fd/62 00:04:18.327 ++ mktemp /tmp/62.XXX 00:04:18.327 + tmp_file_1=/tmp/62.XXN 00:04:18.327 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:18.327 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:18.327 + tmp_file_2=/tmp/spdk_tgt_config.json.vFt 00:04:18.327 + ret=0 00:04:18.327 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:18.584 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:18.841 + diff -u /tmp/62.XXN /tmp/spdk_tgt_config.json.vFt 00:04:18.841 + ret=1 00:04:18.841 + echo '=== Start of file: /tmp/62.XXN ===' 00:04:18.841 + cat /tmp/62.XXN 00:04:18.841 + echo '=== End of file: /tmp/62.XXN ===' 00:04:18.841 + echo '' 00:04:18.841 + echo '=== Start of file: /tmp/spdk_tgt_config.json.vFt ===' 00:04:18.841 + cat /tmp/spdk_tgt_config.json.vFt 00:04:18.841 + echo '=== End of file: /tmp/spdk_tgt_config.json.vFt ===' 00:04:18.841 + echo '' 00:04:18.841 + rm /tmp/62.XXN /tmp/spdk_tgt_config.json.vFt 00:04:18.841 + exit 1 00:04:18.841 23:07:33 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:18.841 INFO: configuration change detected. 
00:04:18.841 23:07:33 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:18.841 23:07:33 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:18.841 23:07:33 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:18.841 23:07:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.841 23:07:33 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:18.841 23:07:33 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:18.841 23:07:33 json_config -- json_config/json_config.sh@317 -- # [[ -n 2215559 ]] 00:04:18.841 23:07:33 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:18.841 23:07:33 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:18.841 23:07:33 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:18.841 23:07:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.841 23:07:33 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:18.841 23:07:33 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:18.841 23:07:33 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:18.841 23:07:33 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:18.841 23:07:33 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:18.841 23:07:33 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:18.841 23:07:33 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:18.841 23:07:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.841 23:07:33 json_config -- json_config/json_config.sh@323 -- # killprocess 2215559 00:04:18.841 23:07:33 json_config -- common/autotest_common.sh@948 -- # '[' -z 2215559 ']' 00:04:18.841 23:07:33 json_config -- common/autotest_common.sh@952 -- # kill -0 2215559 00:04:18.841 23:07:33 json_config -- common/autotest_common.sh@953 -- # uname 00:04:18.841 23:07:33 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:18.841 23:07:33 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2215559 00:04:18.841 23:07:33 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:18.841 23:07:33 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:18.841 23:07:33 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2215559' 00:04:18.841 killing process with pid 2215559 00:04:18.841 23:07:33 json_config -- common/autotest_common.sh@967 -- # kill 2215559 00:04:18.841 23:07:33 json_config -- common/autotest_common.sh@972 -- # wait 2215559 00:04:20.793 23:07:35 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:20.793 23:07:35 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:20.793 23:07:35 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:20.793 23:07:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.793 23:07:35 json_config -- json_config/json_config.sh@328 -- # return 0 00:04:20.793 23:07:35 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:20.793 INFO: Success 00:04:20.793 00:04:20.793 real 0m16.037s 
00:04:20.793 user 0m17.922s 00:04:20.793 sys 0m2.050s 00:04:20.793 23:07:35 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:20.793 23:07:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.793 ************************************ 00:04:20.793 END TEST json_config 00:04:20.793 ************************************ 00:04:20.793 23:07:35 -- common/autotest_common.sh@1142 -- # return 0 00:04:20.793 23:07:35 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:20.793 23:07:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.793 23:07:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.793 23:07:35 -- common/autotest_common.sh@10 -- # set +x 00:04:20.793 ************************************ 00:04:20.793 START TEST json_config_extra_key 00:04:20.793 ************************************ 00:04:20.793 23:07:35 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:20.793 23:07:35 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:20.793 23:07:35 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:20.793 23:07:35 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:20.793 23:07:35 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:20.793 23:07:35 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:20.793 23:07:35 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:20.793 23:07:35 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:20.793 23:07:35 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:20.793 23:07:35 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:20.793 23:07:35 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:20.793 23:07:35 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:20.793 23:07:35 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:20.793 23:07:35 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:04:20.793 23:07:35 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:04:20.793 23:07:35 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:20.793 23:07:35 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:20.793 23:07:35 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:20.793 23:07:35 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:20.793 23:07:35 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:20.794 23:07:35 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:20.794 23:07:35 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:20.794 23:07:35 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:20.794 23:07:35 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:20.794 23:07:35 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:20.794 23:07:35 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:20.794 23:07:35 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:20.794 23:07:35 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:20.794 23:07:35 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:20.794 23:07:35 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:20.794 23:07:35 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:20.794 23:07:35 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:20.794 23:07:35 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:20.794 23:07:35 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:20.794 23:07:35 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:20.794 23:07:35 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:20.794 23:07:35 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:20.794 23:07:35 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:20.794 23:07:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:20.794 23:07:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:20.794 23:07:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:20.794 23:07:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:20.794 23:07:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:20.794 23:07:35 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:20.794 23:07:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:20.794 23:07:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:20.794 23:07:35 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:20.794 23:07:35 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:20.794 INFO: launching applications... 00:04:20.794 23:07:35 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:20.794 23:07:35 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:20.794 23:07:35 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:20.794 23:07:35 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:20.794 23:07:35 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:20.794 23:07:35 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:20.794 23:07:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:20.794 23:07:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:20.794 23:07:35 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2216469 00:04:20.794 23:07:35 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:20.794 23:07:35 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:20.794 Waiting for target to run... 00:04:20.794 23:07:35 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2216469 /var/tmp/spdk_tgt.sock 00:04:20.794 23:07:35 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 2216469 ']' 00:04:20.794 23:07:35 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:20.794 23:07:35 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:20.794 23:07:35 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:20.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:20.794 23:07:35 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:20.794 23:07:35 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:20.794 [2024-07-15 23:07:35.807678] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
00:04:20.794 [2024-07-15 23:07:35.807812] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2216469 ] 00:04:20.794 EAL: No free 2048 kB hugepages reported on node 1 00:04:21.052 [2024-07-15 23:07:36.159767] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.052 [2024-07-15 23:07:36.249313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.618 23:07:36 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:21.618 23:07:36 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:21.618 23:07:36 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:21.618 00:04:21.618 23:07:36 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:21.618 INFO: shutting down applications... 00:04:21.618 23:07:36 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:21.618 23:07:36 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:21.618 23:07:36 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:21.618 23:07:36 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2216469 ]] 00:04:21.618 23:07:36 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2216469 00:04:21.618 23:07:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:21.618 23:07:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:21.618 23:07:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2216469 00:04:21.618 23:07:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:22.183 23:07:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:22.183 23:07:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:22.183 23:07:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2216469 00:04:22.183 23:07:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:22.440 23:07:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:22.440 23:07:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:22.440 23:07:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2216469 00:04:22.440 23:07:37 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:22.440 23:07:37 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:22.440 23:07:37 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:22.440 23:07:37 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:22.440 SPDK target shutdown done 00:04:22.440 23:07:37 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:22.440 Success 00:04:22.440 00:04:22.440 real 0m2.050s 00:04:22.440 user 0m1.557s 00:04:22.440 sys 0m0.443s 00:04:22.440 23:07:37 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:22.440 23:07:37 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:22.440 ************************************ 00:04:22.440 END TEST json_config_extra_key 00:04:22.440 ************************************ 00:04:22.698 23:07:37 -- 
common/autotest_common.sh@1142 -- # return 0 00:04:22.698 23:07:37 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:22.698 23:07:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:22.698 23:07:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:22.698 23:07:37 -- common/autotest_common.sh@10 -- # set +x 00:04:22.698 ************************************ 00:04:22.698 START TEST alias_rpc 00:04:22.698 ************************************ 00:04:22.698 23:07:37 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:22.698 * Looking for test storage... 00:04:22.698 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:22.698 23:07:37 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:22.698 23:07:37 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2216791 00:04:22.698 23:07:37 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:22.698 23:07:37 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2216791 00:04:22.698 23:07:37 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 2216791 ']' 00:04:22.698 23:07:37 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:22.698 23:07:37 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:22.698 23:07:37 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:22.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:22.698 23:07:37 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:22.698 23:07:37 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.698 [2024-07-15 23:07:37.909462] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
00:04:22.698 [2024-07-15 23:07:37.909560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2216791 ] 00:04:22.698 EAL: No free 2048 kB hugepages reported on node 1 00:04:22.698 [2024-07-15 23:07:37.971502] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.955 [2024-07-15 23:07:38.089744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.212 23:07:38 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:23.212 23:07:38 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:23.212 23:07:38 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:23.468 23:07:38 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2216791 00:04:23.468 23:07:38 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 2216791 ']' 00:04:23.468 23:07:38 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 2216791 00:04:23.468 23:07:38 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:23.468 23:07:38 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:23.468 23:07:38 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2216791 00:04:23.468 23:07:38 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:23.468 23:07:38 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:23.468 23:07:38 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2216791' 00:04:23.468 killing process with pid 2216791 00:04:23.468 23:07:38 alias_rpc -- common/autotest_common.sh@967 -- # kill 2216791 00:04:23.468 23:07:38 alias_rpc -- common/autotest_common.sh@972 -- # wait 2216791 00:04:24.033 00:04:24.033 real 0m1.298s 00:04:24.033 user 0m1.383s 00:04:24.033 sys 0m0.436s 00:04:24.033 23:07:39 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:24.033 23:07:39 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.033 ************************************ 00:04:24.033 END TEST alias_rpc 00:04:24.033 ************************************ 00:04:24.033 23:07:39 -- common/autotest_common.sh@1142 -- # return 0 00:04:24.033 23:07:39 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:24.033 23:07:39 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:24.033 23:07:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:24.033 23:07:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.033 23:07:39 -- common/autotest_common.sh@10 -- # set +x 00:04:24.033 ************************************ 00:04:24.033 START TEST spdkcli_tcp 00:04:24.033 ************************************ 00:04:24.033 23:07:39 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:24.033 * Looking for test storage... 
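Note: the killprocess steps above (ps lookup by pid, kill, wait) are the harness's standard teardown for a target it started. Stripped of the reactor-name bookkeeping, a rough equivalent is:

    pid=2216791                               # pid recorded when the target was launched
    if ps --no-headers -o comm= "$pid" >/dev/null 2>&1; then
        kill "$pid"                           # plain kill (SIGTERM); the json_config test earlier used SIGINT instead
        while kill -0 "$pid" 2>/dev/null; do  # poll until the process is really gone
            sleep 0.1
        done
    fi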
00:04:24.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:24.033 23:07:39 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:24.033 23:07:39 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:24.033 23:07:39 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:24.033 23:07:39 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:24.033 23:07:39 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:24.033 23:07:39 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:24.033 23:07:39 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:24.033 23:07:39 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:24.033 23:07:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:24.033 23:07:39 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2216976 00:04:24.033 23:07:39 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:24.033 23:07:39 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2216976 00:04:24.033 23:07:39 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 2216976 ']' 00:04:24.033 23:07:39 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:24.033 23:07:39 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:24.034 23:07:39 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:24.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:24.034 23:07:39 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:24.034 23:07:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:24.034 [2024-07-15 23:07:39.256838] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
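Note: the spdkcli_tcp test configured here exposes the target's UNIX-domain RPC socket over TCP at 127.0.0.1:9998 (IP_ADDRESS/PORT above) and then drives it with rpc.py's TCP options, as the following lines show. The shape of that bridge, as a sketch:

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &                  # forward a TCP connection to the RPC socket
    socat_pid=$!
    ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods        # retry/timeout/address/port flags as used below
    kill "$socat_pid" 2>/dev/null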
00:04:24.034 [2024-07-15 23:07:39.256914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2216976 ] 00:04:24.034 EAL: No free 2048 kB hugepages reported on node 1 00:04:24.034 [2024-07-15 23:07:39.317956] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:24.291 [2024-07-15 23:07:39.437766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:24.291 [2024-07-15 23:07:39.437789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.548 23:07:39 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:24.548 23:07:39 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:04:24.548 23:07:39 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2216987 00:04:24.548 23:07:39 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:24.548 23:07:39 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:24.865 [ 00:04:24.866 "bdev_malloc_delete", 00:04:24.866 "bdev_malloc_create", 00:04:24.866 "bdev_null_resize", 00:04:24.866 "bdev_null_delete", 00:04:24.866 "bdev_null_create", 00:04:24.866 "bdev_nvme_cuse_unregister", 00:04:24.866 "bdev_nvme_cuse_register", 00:04:24.866 "bdev_opal_new_user", 00:04:24.866 "bdev_opal_set_lock_state", 00:04:24.866 "bdev_opal_delete", 00:04:24.866 "bdev_opal_get_info", 00:04:24.866 "bdev_opal_create", 00:04:24.866 "bdev_nvme_opal_revert", 00:04:24.866 "bdev_nvme_opal_init", 00:04:24.866 "bdev_nvme_send_cmd", 00:04:24.866 "bdev_nvme_get_path_iostat", 00:04:24.866 "bdev_nvme_get_mdns_discovery_info", 00:04:24.866 "bdev_nvme_stop_mdns_discovery", 00:04:24.866 "bdev_nvme_start_mdns_discovery", 00:04:24.866 "bdev_nvme_set_multipath_policy", 00:04:24.866 "bdev_nvme_set_preferred_path", 00:04:24.866 "bdev_nvme_get_io_paths", 00:04:24.866 "bdev_nvme_remove_error_injection", 00:04:24.866 "bdev_nvme_add_error_injection", 00:04:24.866 "bdev_nvme_get_discovery_info", 00:04:24.866 "bdev_nvme_stop_discovery", 00:04:24.866 "bdev_nvme_start_discovery", 00:04:24.866 "bdev_nvme_get_controller_health_info", 00:04:24.866 "bdev_nvme_disable_controller", 00:04:24.866 "bdev_nvme_enable_controller", 00:04:24.866 "bdev_nvme_reset_controller", 00:04:24.866 "bdev_nvme_get_transport_statistics", 00:04:24.866 "bdev_nvme_apply_firmware", 00:04:24.866 "bdev_nvme_detach_controller", 00:04:24.866 "bdev_nvme_get_controllers", 00:04:24.866 "bdev_nvme_attach_controller", 00:04:24.866 "bdev_nvme_set_hotplug", 00:04:24.866 "bdev_nvme_set_options", 00:04:24.866 "bdev_passthru_delete", 00:04:24.866 "bdev_passthru_create", 00:04:24.866 "bdev_lvol_set_parent_bdev", 00:04:24.866 "bdev_lvol_set_parent", 00:04:24.866 "bdev_lvol_check_shallow_copy", 00:04:24.866 "bdev_lvol_start_shallow_copy", 00:04:24.866 "bdev_lvol_grow_lvstore", 00:04:24.866 "bdev_lvol_get_lvols", 00:04:24.866 "bdev_lvol_get_lvstores", 00:04:24.866 "bdev_lvol_delete", 00:04:24.866 "bdev_lvol_set_read_only", 00:04:24.866 "bdev_lvol_resize", 00:04:24.866 "bdev_lvol_decouple_parent", 00:04:24.866 "bdev_lvol_inflate", 00:04:24.866 "bdev_lvol_rename", 00:04:24.866 "bdev_lvol_clone_bdev", 00:04:24.866 "bdev_lvol_clone", 00:04:24.866 "bdev_lvol_snapshot", 00:04:24.866 "bdev_lvol_create", 00:04:24.866 "bdev_lvol_delete_lvstore", 00:04:24.866 
"bdev_lvol_rename_lvstore", 00:04:24.866 "bdev_lvol_create_lvstore", 00:04:24.866 "bdev_raid_set_options", 00:04:24.866 "bdev_raid_remove_base_bdev", 00:04:24.866 "bdev_raid_add_base_bdev", 00:04:24.866 "bdev_raid_delete", 00:04:24.866 "bdev_raid_create", 00:04:24.866 "bdev_raid_get_bdevs", 00:04:24.866 "bdev_error_inject_error", 00:04:24.866 "bdev_error_delete", 00:04:24.866 "bdev_error_create", 00:04:24.866 "bdev_split_delete", 00:04:24.866 "bdev_split_create", 00:04:24.866 "bdev_delay_delete", 00:04:24.866 "bdev_delay_create", 00:04:24.866 "bdev_delay_update_latency", 00:04:24.866 "bdev_zone_block_delete", 00:04:24.866 "bdev_zone_block_create", 00:04:24.866 "blobfs_create", 00:04:24.866 "blobfs_detect", 00:04:24.866 "blobfs_set_cache_size", 00:04:24.866 "bdev_aio_delete", 00:04:24.866 "bdev_aio_rescan", 00:04:24.866 "bdev_aio_create", 00:04:24.866 "bdev_ftl_set_property", 00:04:24.866 "bdev_ftl_get_properties", 00:04:24.866 "bdev_ftl_get_stats", 00:04:24.866 "bdev_ftl_unmap", 00:04:24.866 "bdev_ftl_unload", 00:04:24.866 "bdev_ftl_delete", 00:04:24.866 "bdev_ftl_load", 00:04:24.866 "bdev_ftl_create", 00:04:24.866 "bdev_virtio_attach_controller", 00:04:24.866 "bdev_virtio_scsi_get_devices", 00:04:24.866 "bdev_virtio_detach_controller", 00:04:24.866 "bdev_virtio_blk_set_hotplug", 00:04:24.866 "bdev_iscsi_delete", 00:04:24.866 "bdev_iscsi_create", 00:04:24.866 "bdev_iscsi_set_options", 00:04:24.866 "accel_error_inject_error", 00:04:24.866 "ioat_scan_accel_module", 00:04:24.866 "dsa_scan_accel_module", 00:04:24.866 "iaa_scan_accel_module", 00:04:24.866 "vfu_virtio_create_scsi_endpoint", 00:04:24.866 "vfu_virtio_scsi_remove_target", 00:04:24.866 "vfu_virtio_scsi_add_target", 00:04:24.866 "vfu_virtio_create_blk_endpoint", 00:04:24.866 "vfu_virtio_delete_endpoint", 00:04:24.866 "keyring_file_remove_key", 00:04:24.866 "keyring_file_add_key", 00:04:24.866 "keyring_linux_set_options", 00:04:24.866 "iscsi_get_histogram", 00:04:24.866 "iscsi_enable_histogram", 00:04:24.866 "iscsi_set_options", 00:04:24.866 "iscsi_get_auth_groups", 00:04:24.866 "iscsi_auth_group_remove_secret", 00:04:24.866 "iscsi_auth_group_add_secret", 00:04:24.866 "iscsi_delete_auth_group", 00:04:24.866 "iscsi_create_auth_group", 00:04:24.866 "iscsi_set_discovery_auth", 00:04:24.866 "iscsi_get_options", 00:04:24.866 "iscsi_target_node_request_logout", 00:04:24.866 "iscsi_target_node_set_redirect", 00:04:24.866 "iscsi_target_node_set_auth", 00:04:24.866 "iscsi_target_node_add_lun", 00:04:24.866 "iscsi_get_stats", 00:04:24.866 "iscsi_get_connections", 00:04:24.866 "iscsi_portal_group_set_auth", 00:04:24.866 "iscsi_start_portal_group", 00:04:24.866 "iscsi_delete_portal_group", 00:04:24.866 "iscsi_create_portal_group", 00:04:24.866 "iscsi_get_portal_groups", 00:04:24.866 "iscsi_delete_target_node", 00:04:24.866 "iscsi_target_node_remove_pg_ig_maps", 00:04:24.866 "iscsi_target_node_add_pg_ig_maps", 00:04:24.866 "iscsi_create_target_node", 00:04:24.866 "iscsi_get_target_nodes", 00:04:24.866 "iscsi_delete_initiator_group", 00:04:24.866 "iscsi_initiator_group_remove_initiators", 00:04:24.866 "iscsi_initiator_group_add_initiators", 00:04:24.866 "iscsi_create_initiator_group", 00:04:24.866 "iscsi_get_initiator_groups", 00:04:24.866 "nvmf_set_crdt", 00:04:24.866 "nvmf_set_config", 00:04:24.866 "nvmf_set_max_subsystems", 00:04:24.866 "nvmf_stop_mdns_prr", 00:04:24.866 "nvmf_publish_mdns_prr", 00:04:24.866 "nvmf_subsystem_get_listeners", 00:04:24.866 "nvmf_subsystem_get_qpairs", 00:04:24.866 "nvmf_subsystem_get_controllers", 00:04:24.866 
"nvmf_get_stats", 00:04:24.866 "nvmf_get_transports", 00:04:24.866 "nvmf_create_transport", 00:04:24.866 "nvmf_get_targets", 00:04:24.866 "nvmf_delete_target", 00:04:24.866 "nvmf_create_target", 00:04:24.866 "nvmf_subsystem_allow_any_host", 00:04:24.866 "nvmf_subsystem_remove_host", 00:04:24.866 "nvmf_subsystem_add_host", 00:04:24.866 "nvmf_ns_remove_host", 00:04:24.866 "nvmf_ns_add_host", 00:04:24.866 "nvmf_subsystem_remove_ns", 00:04:24.866 "nvmf_subsystem_add_ns", 00:04:24.866 "nvmf_subsystem_listener_set_ana_state", 00:04:24.866 "nvmf_discovery_get_referrals", 00:04:24.866 "nvmf_discovery_remove_referral", 00:04:24.866 "nvmf_discovery_add_referral", 00:04:24.866 "nvmf_subsystem_remove_listener", 00:04:24.866 "nvmf_subsystem_add_listener", 00:04:24.866 "nvmf_delete_subsystem", 00:04:24.866 "nvmf_create_subsystem", 00:04:24.866 "nvmf_get_subsystems", 00:04:24.866 "env_dpdk_get_mem_stats", 00:04:24.866 "nbd_get_disks", 00:04:24.866 "nbd_stop_disk", 00:04:24.866 "nbd_start_disk", 00:04:24.866 "ublk_recover_disk", 00:04:24.866 "ublk_get_disks", 00:04:24.866 "ublk_stop_disk", 00:04:24.866 "ublk_start_disk", 00:04:24.866 "ublk_destroy_target", 00:04:24.866 "ublk_create_target", 00:04:24.866 "virtio_blk_create_transport", 00:04:24.866 "virtio_blk_get_transports", 00:04:24.866 "vhost_controller_set_coalescing", 00:04:24.866 "vhost_get_controllers", 00:04:24.866 "vhost_delete_controller", 00:04:24.866 "vhost_create_blk_controller", 00:04:24.866 "vhost_scsi_controller_remove_target", 00:04:24.866 "vhost_scsi_controller_add_target", 00:04:24.866 "vhost_start_scsi_controller", 00:04:24.866 "vhost_create_scsi_controller", 00:04:24.866 "thread_set_cpumask", 00:04:24.866 "framework_get_governor", 00:04:24.866 "framework_get_scheduler", 00:04:24.866 "framework_set_scheduler", 00:04:24.866 "framework_get_reactors", 00:04:24.866 "thread_get_io_channels", 00:04:24.866 "thread_get_pollers", 00:04:24.866 "thread_get_stats", 00:04:24.866 "framework_monitor_context_switch", 00:04:24.866 "spdk_kill_instance", 00:04:24.866 "log_enable_timestamps", 00:04:24.866 "log_get_flags", 00:04:24.866 "log_clear_flag", 00:04:24.866 "log_set_flag", 00:04:24.866 "log_get_level", 00:04:24.866 "log_set_level", 00:04:24.866 "log_get_print_level", 00:04:24.866 "log_set_print_level", 00:04:24.866 "framework_enable_cpumask_locks", 00:04:24.866 "framework_disable_cpumask_locks", 00:04:24.866 "framework_wait_init", 00:04:24.866 "framework_start_init", 00:04:24.866 "scsi_get_devices", 00:04:24.866 "bdev_get_histogram", 00:04:24.866 "bdev_enable_histogram", 00:04:24.866 "bdev_set_qos_limit", 00:04:24.866 "bdev_set_qd_sampling_period", 00:04:24.866 "bdev_get_bdevs", 00:04:24.866 "bdev_reset_iostat", 00:04:24.866 "bdev_get_iostat", 00:04:24.866 "bdev_examine", 00:04:24.866 "bdev_wait_for_examine", 00:04:24.866 "bdev_set_options", 00:04:24.866 "notify_get_notifications", 00:04:24.866 "notify_get_types", 00:04:24.866 "accel_get_stats", 00:04:24.866 "accel_set_options", 00:04:24.866 "accel_set_driver", 00:04:24.866 "accel_crypto_key_destroy", 00:04:24.866 "accel_crypto_keys_get", 00:04:24.866 "accel_crypto_key_create", 00:04:24.866 "accel_assign_opc", 00:04:24.866 "accel_get_module_info", 00:04:24.866 "accel_get_opc_assignments", 00:04:24.866 "vmd_rescan", 00:04:24.866 "vmd_remove_device", 00:04:24.866 "vmd_enable", 00:04:24.866 "sock_get_default_impl", 00:04:24.866 "sock_set_default_impl", 00:04:24.866 "sock_impl_set_options", 00:04:24.866 "sock_impl_get_options", 00:04:24.866 "iobuf_get_stats", 00:04:24.866 "iobuf_set_options", 
00:04:24.866 "keyring_get_keys", 00:04:24.866 "framework_get_pci_devices", 00:04:24.866 "framework_get_config", 00:04:24.866 "framework_get_subsystems", 00:04:24.866 "vfu_tgt_set_base_path", 00:04:24.866 "trace_get_info", 00:04:24.866 "trace_get_tpoint_group_mask", 00:04:24.866 "trace_disable_tpoint_group", 00:04:24.866 "trace_enable_tpoint_group", 00:04:24.867 "trace_clear_tpoint_mask", 00:04:24.867 "trace_set_tpoint_mask", 00:04:24.867 "spdk_get_version", 00:04:24.867 "rpc_get_methods" 00:04:24.867 ] 00:04:24.867 23:07:39 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:24.867 23:07:39 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:24.867 23:07:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:24.867 23:07:39 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:24.867 23:07:39 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2216976 00:04:24.867 23:07:39 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 2216976 ']' 00:04:24.867 23:07:39 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 2216976 00:04:24.867 23:07:39 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:04:24.867 23:07:39 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:24.867 23:07:39 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2216976 00:04:24.867 23:07:40 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:24.867 23:07:40 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:24.867 23:07:40 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2216976' 00:04:24.867 killing process with pid 2216976 00:04:24.867 23:07:40 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 2216976 00:04:24.867 23:07:40 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 2216976 00:04:25.431 00:04:25.431 real 0m1.339s 00:04:25.431 user 0m2.347s 00:04:25.431 sys 0m0.431s 00:04:25.431 23:07:40 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.431 23:07:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:25.431 ************************************ 00:04:25.431 END TEST spdkcli_tcp 00:04:25.431 ************************************ 00:04:25.431 23:07:40 -- common/autotest_common.sh@1142 -- # return 0 00:04:25.431 23:07:40 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:25.431 23:07:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.431 23:07:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.431 23:07:40 -- common/autotest_common.sh@10 -- # set +x 00:04:25.431 ************************************ 00:04:25.431 START TEST dpdk_mem_utility 00:04:25.431 ************************************ 00:04:25.431 23:07:40 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:25.431 * Looking for test storage... 
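Note: the dpdk_mem_utility test starting here asks the running target to dump its DPDK memory state and then post-processes the dump with scripts/dpdk_mem_info.py; the heap, mempool and memzone listings below are that script's output. The sequence it drives, roughly:

    ./scripts/rpc.py env_dpdk_get_mem_stats        # target writes the dump and returns {"filename": "/tmp/spdk_mem_dump.txt"}
    ./scripts/dpdk_mem_info.py                     # summarize heaps, mempools and memzones
    ./scripts/dpdk_mem_info.py -m 0                # the more detailed per-heap view, as invoked further down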
00:04:25.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:25.431 23:07:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:25.431 23:07:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2217182 00:04:25.431 23:07:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:25.432 23:07:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2217182 00:04:25.432 23:07:40 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 2217182 ']' 00:04:25.432 23:07:40 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.432 23:07:40 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:25.432 23:07:40 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:25.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:25.432 23:07:40 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:25.432 23:07:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:25.432 [2024-07-15 23:07:40.637451] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:04:25.432 [2024-07-15 23:07:40.637537] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2217182 ] 00:04:25.432 EAL: No free 2048 kB hugepages reported on node 1 00:04:25.432 [2024-07-15 23:07:40.694372] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.689 [2024-07-15 23:07:40.801419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.947 23:07:41 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:25.947 23:07:41 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:04:25.947 23:07:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:25.947 23:07:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:25.947 23:07:41 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.947 23:07:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:25.947 { 00:04:25.947 "filename": "/tmp/spdk_mem_dump.txt" 00:04:25.947 } 00:04:25.947 23:07:41 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.947 23:07:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:25.947 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:25.947 1 heaps totaling size 814.000000 MiB 00:04:25.947 size: 814.000000 MiB heap id: 0 00:04:25.947 end heaps---------- 00:04:25.947 8 mempools totaling size 598.116089 MiB 00:04:25.947 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:25.947 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:25.947 size: 84.521057 MiB name: bdev_io_2217182 00:04:25.947 size: 51.011292 MiB name: evtpool_2217182 00:04:25.947 
size: 50.003479 MiB name: msgpool_2217182 00:04:25.947 size: 21.763794 MiB name: PDU_Pool 00:04:25.947 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:25.947 size: 0.026123 MiB name: Session_Pool 00:04:25.947 end mempools------- 00:04:25.947 6 memzones totaling size 4.142822 MiB 00:04:25.947 size: 1.000366 MiB name: RG_ring_0_2217182 00:04:25.947 size: 1.000366 MiB name: RG_ring_1_2217182 00:04:25.947 size: 1.000366 MiB name: RG_ring_4_2217182 00:04:25.947 size: 1.000366 MiB name: RG_ring_5_2217182 00:04:25.947 size: 0.125366 MiB name: RG_ring_2_2217182 00:04:25.947 size: 0.015991 MiB name: RG_ring_3_2217182 00:04:25.947 end memzones------- 00:04:25.947 23:07:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:25.947 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:25.947 list of free elements. size: 12.519348 MiB 00:04:25.947 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:25.947 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:25.947 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:25.947 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:25.947 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:25.947 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:25.947 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:25.947 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:25.947 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:25.947 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:25.947 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:25.947 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:25.947 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:25.947 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:25.947 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:25.947 list of standard malloc elements. 
size: 199.218079 MiB 00:04:25.947 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:25.947 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:25.947 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:25.947 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:25.947 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:25.947 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:25.947 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:25.947 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:25.947 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:25.947 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:25.947 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:25.947 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:25.947 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:25.947 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:25.947 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:25.947 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:25.947 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:25.947 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:25.947 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:25.947 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:25.947 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:25.947 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:25.947 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:25.947 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:25.947 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:25.947 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:25.947 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:25.947 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:25.947 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:25.947 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:25.947 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:25.947 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:25.947 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:25.947 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:25.947 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:25.947 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:25.947 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:25.947 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:25.947 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:25.947 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:25.947 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:25.947 list of memzone associated elements. 
size: 602.262573 MiB 00:04:25.947 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:25.947 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:25.947 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:25.947 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:25.947 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:25.947 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2217182_0 00:04:25.948 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:25.948 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2217182_0 00:04:25.948 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:25.948 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2217182_0 00:04:25.948 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:25.948 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:25.948 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:25.948 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:25.948 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:25.948 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2217182 00:04:25.948 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:25.948 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2217182 00:04:25.948 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:25.948 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2217182 00:04:25.948 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:25.948 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:25.948 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:25.948 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:25.948 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:25.948 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:25.948 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:25.948 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:25.948 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:25.948 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2217182 00:04:25.948 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:25.948 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2217182 00:04:25.948 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:25.948 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2217182 00:04:25.948 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:25.948 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2217182 00:04:25.948 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:25.948 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2217182 00:04:25.948 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:25.948 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:25.948 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:25.948 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:25.948 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:25.948 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:25.948 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:25.948 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_2217182 00:04:25.948 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:25.948 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:25.948 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:25.948 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:25.948 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:25.948 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2217182 00:04:25.948 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:25.948 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:25.948 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:25.948 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2217182 00:04:25.948 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:25.948 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2217182 00:04:25.948 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:25.948 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:25.948 23:07:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:25.948 23:07:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2217182 00:04:25.948 23:07:41 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 2217182 ']' 00:04:25.948 23:07:41 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 2217182 00:04:25.948 23:07:41 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:04:25.948 23:07:41 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:25.948 23:07:41 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2217182 00:04:25.948 23:07:41 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:25.948 23:07:41 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:25.948 23:07:41 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2217182' 00:04:25.948 killing process with pid 2217182 00:04:25.948 23:07:41 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 2217182 00:04:25.948 23:07:41 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 2217182 00:04:26.513 00:04:26.513 real 0m1.132s 00:04:26.513 user 0m1.088s 00:04:26.513 sys 0m0.411s 00:04:26.513 23:07:41 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.513 23:07:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:26.513 ************************************ 00:04:26.513 END TEST dpdk_mem_utility 00:04:26.513 ************************************ 00:04:26.513 23:07:41 -- common/autotest_common.sh@1142 -- # return 0 00:04:26.513 23:07:41 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:26.513 23:07:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:26.513 23:07:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.513 23:07:41 -- common/autotest_common.sh@10 -- # set +x 00:04:26.513 ************************************ 00:04:26.513 START TEST event 00:04:26.513 ************************************ 00:04:26.513 23:07:41 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:26.513 * Looking for test storage... 
00:04:26.513 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:26.513 23:07:41 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:26.513 23:07:41 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:26.513 23:07:41 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:26.513 23:07:41 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:26.513 23:07:41 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.513 23:07:41 event -- common/autotest_common.sh@10 -- # set +x 00:04:26.513 ************************************ 00:04:26.513 START TEST event_perf 00:04:26.513 ************************************ 00:04:26.513 23:07:41 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:26.513 Running I/O for 1 seconds...[2024-07-15 23:07:41.808985] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:04:26.514 [2024-07-15 23:07:41.809058] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2217371 ] 00:04:26.770 EAL: No free 2048 kB hugepages reported on node 1 00:04:26.770 [2024-07-15 23:07:41.871826] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:26.770 [2024-07-15 23:07:41.992917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:26.770 [2024-07-15 23:07:41.992998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:26.770 [2024-07-15 23:07:41.993065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:26.770 [2024-07-15 23:07:41.993068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.139 Running I/O for 1 seconds... 00:04:28.139 lcore 0: 225847 00:04:28.139 lcore 1: 225847 00:04:28.139 lcore 2: 225847 00:04:28.139 lcore 3: 225847 00:04:28.139 done. 00:04:28.139 00:04:28.139 real 0m1.320s 00:04:28.139 user 0m4.230s 00:04:28.139 sys 0m0.085s 00:04:28.139 23:07:43 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:28.139 23:07:43 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:28.139 ************************************ 00:04:28.139 END TEST event_perf 00:04:28.139 ************************************ 00:04:28.139 23:07:43 event -- common/autotest_common.sh@1142 -- # return 0 00:04:28.139 23:07:43 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:28.139 23:07:43 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:28.139 23:07:43 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:28.139 23:07:43 event -- common/autotest_common.sh@10 -- # set +x 00:04:28.139 ************************************ 00:04:28.139 START TEST event_reactor 00:04:28.139 ************************************ 00:04:28.139 23:07:43 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:28.139 [2024-07-15 23:07:43.178577] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
00:04:28.139 [2024-07-15 23:07:43.178644] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2217536 ] 00:04:28.139 EAL: No free 2048 kB hugepages reported on node 1 00:04:28.139 [2024-07-15 23:07:43.242217] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.139 [2024-07-15 23:07:43.359891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.509 test_start 00:04:29.509 oneshot 00:04:29.509 tick 100 00:04:29.509 tick 100 00:04:29.509 tick 250 00:04:29.509 tick 100 00:04:29.509 tick 100 00:04:29.509 tick 100 00:04:29.509 tick 250 00:04:29.509 tick 500 00:04:29.509 tick 100 00:04:29.509 tick 100 00:04:29.509 tick 250 00:04:29.509 tick 100 00:04:29.509 tick 100 00:04:29.509 test_end 00:04:29.509 00:04:29.509 real 0m1.313s 00:04:29.509 user 0m1.229s 00:04:29.509 sys 0m0.079s 00:04:29.509 23:07:44 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.509 23:07:44 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:29.509 ************************************ 00:04:29.509 END TEST event_reactor 00:04:29.509 ************************************ 00:04:29.509 23:07:44 event -- common/autotest_common.sh@1142 -- # return 0 00:04:29.509 23:07:44 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:29.509 23:07:44 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:29.509 23:07:44 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.509 23:07:44 event -- common/autotest_common.sh@10 -- # set +x 00:04:29.509 ************************************ 00:04:29.509 START TEST event_reactor_perf 00:04:29.509 ************************************ 00:04:29.509 23:07:44 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:29.509 [2024-07-15 23:07:44.537030] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
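Note: event_perf above prints one counter per lcore for its 1-second run (-t 1). A throwaway way to turn that output into a total event rate, assuming the tool's stdout was captured to a file such as perf.out (the filename is made up):

    grep '^lcore' perf.out | awk '{sum += $3} END {printf "%d events/sec across all lcores\n", sum}'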
00:04:29.509 [2024-07-15 23:07:44.537112] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2217806 ] 00:04:29.509 EAL: No free 2048 kB hugepages reported on node 1 00:04:29.509 [2024-07-15 23:07:44.603914] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.509 [2024-07-15 23:07:44.723795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.880 test_start 00:04:30.880 test_end 00:04:30.880 Performance: 354653 events per second 00:04:30.880 00:04:30.880 real 0m1.324s 00:04:30.880 user 0m1.233s 00:04:30.880 sys 0m0.086s 00:04:30.880 23:07:45 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:30.880 23:07:45 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:30.880 ************************************ 00:04:30.880 END TEST event_reactor_perf 00:04:30.880 ************************************ 00:04:30.880 23:07:45 event -- common/autotest_common.sh@1142 -- # return 0 00:04:30.880 23:07:45 event -- event/event.sh@49 -- # uname -s 00:04:30.880 23:07:45 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:30.880 23:07:45 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:30.880 23:07:45 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:30.880 23:07:45 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.880 23:07:45 event -- common/autotest_common.sh@10 -- # set +x 00:04:30.880 ************************************ 00:04:30.880 START TEST event_scheduler 00:04:30.880 ************************************ 00:04:30.880 23:07:45 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:30.880 * Looking for test storage... 00:04:30.880 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:30.880 23:07:45 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:30.880 23:07:45 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2217990 00:04:30.880 23:07:45 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:30.880 23:07:45 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:30.880 23:07:45 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2217990 00:04:30.880 23:07:45 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 2217990 ']' 00:04:30.880 23:07:45 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.880 23:07:45 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:30.880 23:07:45 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
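Note: the scheduler test app launched just above runs with --wait-for-rpc, so it starts its RPC server and then pauses until it is configured; the next lines do exactly that through rpc_cmd. In plain rpc.py terms the handshake is:

    ./scripts/rpc.py framework_set_scheduler dynamic   # choose the scheduler while initialization is still held off
    ./scripts/rpc.py framework_start_init              # let subsystem initialization proceed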
00:04:30.880 23:07:45 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:30.880 23:07:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:30.880 [2024-07-15 23:07:45.994436] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:04:30.880 [2024-07-15 23:07:45.994510] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2217990 ] 00:04:30.880 EAL: No free 2048 kB hugepages reported on node 1 00:04:30.880 [2024-07-15 23:07:46.052950] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:30.880 [2024-07-15 23:07:46.162254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.880 [2024-07-15 23:07:46.162312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:30.880 [2024-07-15 23:07:46.162377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:30.880 [2024-07-15 23:07:46.162380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:31.137 23:07:46 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:31.137 23:07:46 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:04:31.137 23:07:46 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:31.137 23:07:46 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:31.137 23:07:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:31.137 [2024-07-15 23:07:46.199219] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:31.137 [2024-07-15 23:07:46.199245] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:04:31.137 [2024-07-15 23:07:46.199270] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:31.137 [2024-07-15 23:07:46.199291] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:31.137 [2024-07-15 23:07:46.199309] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:31.137 23:07:46 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:31.137 23:07:46 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:31.137 23:07:46 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:31.137 23:07:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:31.137 [2024-07-15 23:07:46.284237] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
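Note: scheduler_create_thread below exercises a test-only RPC plugin (scheduler_plugin) through rpc.py's --plugin option; the thread ids 11 and 12 seen later come back from calls like these. In sketch form, assuming the plugin module is importable from the test directory:

    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100   # pinned thread, 100% active
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50                        # set thread 11 to 50% active
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12                               # remove the short-lived thread 12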
00:04:31.137 23:07:46 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:31.137 23:07:46 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:31.137 23:07:46 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:31.137 23:07:46 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.137 23:07:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:31.137 ************************************ 00:04:31.137 START TEST scheduler_create_thread 00:04:31.137 ************************************ 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:31.137 2 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:31.137 3 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:31.137 4 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:31.137 5 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:31.137 6 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:31.137 7 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:31.137 8 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:31.137 9 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:31.137 10 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:31.137 23:07:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:31.701 23:07:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:31.701 00:04:31.701 real 0m0.590s 00:04:31.701 user 0m0.012s 00:04:31.701 sys 0m0.002s 00:04:31.701 23:07:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:31.701 23:07:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:31.701 ************************************ 00:04:31.701 END TEST scheduler_create_thread 00:04:31.701 ************************************ 00:04:31.701 23:07:46 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:04:31.701 23:07:46 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:31.701 23:07:46 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2217990 00:04:31.701 23:07:46 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 2217990 ']' 00:04:31.701 23:07:46 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 2217990 00:04:31.701 23:07:46 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:04:31.701 23:07:46 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:31.701 23:07:46 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2217990 00:04:31.701 23:07:46 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:04:31.701 23:07:46 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:04:31.701 23:07:46 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2217990' 00:04:31.701 killing process with pid 2217990 00:04:31.701 23:07:46 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 2217990 00:04:31.701 23:07:46 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 2217990 00:04:32.265 [2024-07-15 23:07:47.384421] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:04:32.523 00:04:32.523 real 0m1.749s 00:04:32.523 user 0m2.197s 00:04:32.523 sys 0m0.312s 00:04:32.523 23:07:47 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.523 23:07:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:32.523 ************************************ 00:04:32.523 END TEST event_scheduler 00:04:32.523 ************************************ 00:04:32.523 23:07:47 event -- common/autotest_common.sh@1142 -- # return 0 00:04:32.523 23:07:47 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:32.523 23:07:47 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:32.523 23:07:47 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.523 23:07:47 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.523 23:07:47 event -- common/autotest_common.sh@10 -- # set +x 00:04:32.523 ************************************ 00:04:32.523 START TEST app_repeat 00:04:32.523 ************************************ 00:04:32.523 23:07:47 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:04:32.523 23:07:47 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.523 23:07:47 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:32.523 23:07:47 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:32.523 23:07:47 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:32.523 23:07:47 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:32.523 23:07:47 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:32.523 23:07:47 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:32.523 23:07:47 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2218182 00:04:32.523 23:07:47 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:32.523 23:07:47 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:32.523 23:07:47 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2218182' 00:04:32.523 Process app_repeat pid: 2218182 00:04:32.523 23:07:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:32.523 23:07:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:32.523 spdk_app_start Round 0 00:04:32.523 23:07:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2218182 /var/tmp/spdk-nbd.sock 00:04:32.523 23:07:47 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2218182 ']' 00:04:32.523 23:07:47 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:32.523 23:07:47 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:32.523 23:07:47 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:32.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:32.523 23:07:47 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:32.523 23:07:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:32.523 [2024-07-15 23:07:47.724327] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
00:04:32.523 [2024-07-15 23:07:47.724398] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2218182 ] 00:04:32.523 EAL: No free 2048 kB hugepages reported on node 1 00:04:32.523 [2024-07-15 23:07:47.783859] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:32.780 [2024-07-15 23:07:47.897637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:32.780 [2024-07-15 23:07:47.897641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.780 23:07:48 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:32.780 23:07:48 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:32.780 23:07:48 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:33.037 Malloc0 00:04:33.037 23:07:48 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:33.294 Malloc1 00:04:33.294 23:07:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:33.294 23:07:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:33.294 23:07:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:33.294 23:07:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:33.294 23:07:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:33.294 23:07:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:33.294 23:07:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:33.294 23:07:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:33.294 23:07:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:33.294 23:07:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:33.294 23:07:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:33.295 23:07:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:33.295 23:07:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:33.295 23:07:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:33.295 23:07:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:33.295 23:07:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:33.552 /dev/nbd0 00:04:33.552 23:07:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:33.552 23:07:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:33.552 23:07:48 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:33.552 23:07:48 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:33.552 23:07:48 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:33.552 23:07:48 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:33.552 23:07:48 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:33.552 23:07:48 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:33.552 23:07:48 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:33.552 23:07:48 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:33.552 23:07:48 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:33.552 1+0 records in 00:04:33.552 1+0 records out 00:04:33.552 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00018167 s, 22.5 MB/s 00:04:33.552 23:07:48 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:33.552 23:07:48 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:33.552 23:07:48 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:33.552 23:07:48 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:33.552 23:07:48 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:33.552 23:07:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:33.552 23:07:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:33.552 23:07:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:33.809 /dev/nbd1 00:04:33.809 23:07:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:33.809 23:07:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:33.809 23:07:49 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:33.809 23:07:49 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:33.809 23:07:49 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:33.809 23:07:49 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:33.809 23:07:49 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:33.809 23:07:49 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:33.809 23:07:49 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:33.809 23:07:49 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:33.809 23:07:49 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:33.809 1+0 records in 00:04:33.809 1+0 records out 00:04:33.809 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00021313 s, 19.2 MB/s 00:04:33.809 23:07:49 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:33.809 23:07:49 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:33.809 23:07:49 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:33.809 23:07:49 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:33.809 23:07:49 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:33.809 23:07:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:33.809 23:07:49 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:33.809 23:07:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:33.809 23:07:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:33.809 23:07:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:34.066 23:07:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:34.066 { 00:04:34.066 "nbd_device": "/dev/nbd0", 00:04:34.066 "bdev_name": "Malloc0" 00:04:34.066 }, 00:04:34.066 { 00:04:34.066 "nbd_device": "/dev/nbd1", 00:04:34.066 "bdev_name": "Malloc1" 00:04:34.066 } 00:04:34.066 ]' 00:04:34.066 23:07:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:34.066 { 00:04:34.066 "nbd_device": "/dev/nbd0", 00:04:34.066 "bdev_name": "Malloc0" 00:04:34.066 }, 00:04:34.066 { 00:04:34.066 "nbd_device": "/dev/nbd1", 00:04:34.066 "bdev_name": "Malloc1" 00:04:34.066 } 00:04:34.066 ]' 00:04:34.066 23:07:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:34.323 23:07:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:34.323 /dev/nbd1' 00:04:34.323 23:07:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:34.323 /dev/nbd1' 00:04:34.323 23:07:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:34.323 23:07:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:34.323 23:07:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:34.323 23:07:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:34.323 23:07:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:34.323 23:07:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:34.323 23:07:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.323 23:07:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:34.323 23:07:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:34.323 23:07:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:34.323 23:07:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:34.323 23:07:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:34.323 256+0 records in 00:04:34.323 256+0 records out 00:04:34.323 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00493562 s, 212 MB/s 00:04:34.323 23:07:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:34.323 23:07:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:34.323 256+0 records in 00:04:34.323 256+0 records out 00:04:34.323 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0235404 s, 44.5 MB/s 00:04:34.323 23:07:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:34.323 23:07:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:34.323 256+0 records in 00:04:34.323 256+0 records out 00:04:34.323 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0226788 s, 46.2 MB/s 00:04:34.323 23:07:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:34.323 23:07:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.323 23:07:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:34.323 23:07:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:34.323 23:07:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:34.323 23:07:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:34.323 23:07:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:34.323 23:07:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:34.323 23:07:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:34.323 23:07:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:34.323 23:07:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:34.323 23:07:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:34.323 23:07:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:34.323 23:07:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.323 23:07:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.323 23:07:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:34.323 23:07:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:34.323 23:07:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:34.323 23:07:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:34.581 23:07:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:34.581 23:07:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:34.581 23:07:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:34.581 23:07:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:34.581 23:07:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:34.581 23:07:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:34.581 23:07:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:34.581 23:07:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:34.581 23:07:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:34.581 23:07:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:34.839 23:07:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:34.839 23:07:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:34.839 23:07:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:34.839 23:07:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:34.839 23:07:49 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:34.839 23:07:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:34.839 23:07:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:34.839 23:07:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:34.839 23:07:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:34.839 23:07:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.839 23:07:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:35.097 23:07:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:35.097 23:07:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:35.097 23:07:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:35.097 23:07:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:35.097 23:07:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:35.097 23:07:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:35.097 23:07:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:35.097 23:07:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:35.097 23:07:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:35.097 23:07:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:35.097 23:07:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:35.097 23:07:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:35.097 23:07:50 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:35.354 23:07:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:35.612 [2024-07-15 23:07:50.825933] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:35.870 [2024-07-15 23:07:50.943030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:35.870 [2024-07-15 23:07:50.943030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.870 [2024-07-15 23:07:51.005812] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:35.870 [2024-07-15 23:07:51.005878] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:38.395 23:07:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:38.395 23:07:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:38.395 spdk_app_start Round 1 00:04:38.395 23:07:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2218182 /var/tmp/spdk-nbd.sock 00:04:38.395 23:07:53 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2218182 ']' 00:04:38.395 23:07:53 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:38.395 23:07:53 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:38.395 23:07:53 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:38.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
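The first app_repeat round has just completed above, and each subsequent round repeats the same pattern: create two malloc bdevs, export them over NBD, write 1 MiB of random data, read it back with cmp, tear the devices down, and SIGTERM the app before the harness sleeps 3 seconds and restarts it. A condensed sketch of one round, reconstructed from the trace; the rpc.py path is shortened and /tmp/nbdrandtest stands in for the workspace temp file:

# Sketch of a single app_repeat data-verify round (not the harness itself).
rpc() { ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }

rpc bdev_malloc_create 64 4096                       # -> Malloc0
rpc bdev_malloc_create 64 4096                       # -> Malloc1
rpc nbd_start_disk Malloc0 /dev/nbd0
rpc nbd_start_disk Malloc1 /dev/nbd1

dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256   # 1 MiB of test data
for dev in /dev/nbd0 /dev/nbd1; do
    dd if=/tmp/nbdrandtest of="$dev" bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/nbdrandtest "$dev"                   # verify the read-back matches
done
rm /tmp/nbdrandtest

rpc nbd_stop_disk /dev/nbd0
rpc nbd_stop_disk /dev/nbd1
rpc spdk_kill_instance SIGTERM                       # end of round; harness sleeps 3s and restarts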
00:04:38.395 23:07:53 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:38.395 23:07:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:38.653 23:07:53 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:38.653 23:07:53 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:38.653 23:07:53 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:38.910 Malloc0 00:04:38.910 23:07:54 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:39.166 Malloc1 00:04:39.166 23:07:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:39.166 23:07:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.166 23:07:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:39.166 23:07:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:39.166 23:07:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.166 23:07:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:39.166 23:07:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:39.166 23:07:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.166 23:07:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:39.166 23:07:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:39.166 23:07:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.166 23:07:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:39.166 23:07:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:39.166 23:07:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:39.166 23:07:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:39.166 23:07:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:39.422 /dev/nbd0 00:04:39.422 23:07:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:39.422 23:07:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:39.422 23:07:54 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:39.422 23:07:54 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:39.422 23:07:54 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:39.422 23:07:54 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:39.422 23:07:54 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:39.422 23:07:54 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:39.422 23:07:54 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:39.422 23:07:54 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:39.422 23:07:54 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:39.422 1+0 records in 00:04:39.422 1+0 records out 00:04:39.422 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000156184 s, 26.2 MB/s 00:04:39.422 23:07:54 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:39.422 23:07:54 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:39.422 23:07:54 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:39.422 23:07:54 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:39.422 23:07:54 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:39.422 23:07:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:39.422 23:07:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:39.422 23:07:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:39.680 /dev/nbd1 00:04:39.680 23:07:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:39.680 23:07:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:39.680 23:07:54 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:39.680 23:07:54 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:39.680 23:07:54 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:39.680 23:07:54 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:39.680 23:07:54 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:39.680 23:07:54 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:39.680 23:07:54 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:39.680 23:07:54 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:39.680 23:07:54 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:39.680 1+0 records in 00:04:39.680 1+0 records out 00:04:39.680 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000184199 s, 22.2 MB/s 00:04:39.680 23:07:54 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:39.680 23:07:54 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:39.680 23:07:54 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:39.680 23:07:54 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:39.680 23:07:54 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:39.680 23:07:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:39.680 23:07:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:39.680 23:07:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:39.680 23:07:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.680 23:07:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:39.937 23:07:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:39.937 { 00:04:39.937 "nbd_device": "/dev/nbd0", 00:04:39.937 "bdev_name": "Malloc0" 00:04:39.937 }, 00:04:39.937 { 00:04:39.937 "nbd_device": "/dev/nbd1", 00:04:39.937 "bdev_name": "Malloc1" 00:04:39.937 } 00:04:39.937 ]' 00:04:39.937 23:07:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:39.937 { 00:04:39.937 "nbd_device": "/dev/nbd0", 00:04:39.937 "bdev_name": "Malloc0" 00:04:39.937 }, 00:04:39.937 { 00:04:39.937 "nbd_device": "/dev/nbd1", 00:04:39.937 "bdev_name": "Malloc1" 00:04:39.937 } 00:04:39.937 ]' 00:04:39.937 23:07:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:39.937 23:07:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:39.937 /dev/nbd1' 00:04:39.937 23:07:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:39.937 /dev/nbd1' 00:04:39.937 23:07:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:39.937 23:07:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:39.937 23:07:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:39.937 23:07:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:39.937 23:07:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:39.937 23:07:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:39.937 23:07:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.937 23:07:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:39.937 23:07:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:39.937 23:07:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:39.937 23:07:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:39.937 23:07:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:39.937 256+0 records in 00:04:39.937 256+0 records out 00:04:39.937 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00506468 s, 207 MB/s 00:04:39.937 23:07:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:39.937 23:07:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:39.937 256+0 records in 00:04:39.937 256+0 records out 00:04:39.937 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021199 s, 49.5 MB/s 00:04:39.937 23:07:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:39.937 23:07:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:39.937 256+0 records in 00:04:39.937 256+0 records out 00:04:39.937 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0281998 s, 37.2 MB/s 00:04:39.937 23:07:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:39.937 23:07:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.937 23:07:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:39.937 23:07:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:39.938 23:07:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:39.938 23:07:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:39.938 23:07:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:39.938 23:07:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:39.938 23:07:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:39.938 23:07:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:39.938 23:07:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:39.938 23:07:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:39.938 23:07:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:39.938 23:07:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.938 23:07:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.938 23:07:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:39.938 23:07:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:39.938 23:07:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:39.938 23:07:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:40.194 23:07:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:40.194 23:07:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:40.194 23:07:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:40.194 23:07:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:40.194 23:07:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:40.194 23:07:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:40.492 23:07:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:40.492 23:07:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:40.492 23:07:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:40.492 23:07:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:40.492 23:07:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:40.492 23:07:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:40.492 23:07:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:40.492 23:07:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:40.492 23:07:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:40.492 23:07:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:40.492 23:07:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:40.492 23:07:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:40.492 23:07:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:40.492 23:07:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.492 23:07:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:40.749 23:07:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:40.749 23:07:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:40.749 23:07:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:41.006 23:07:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:41.006 23:07:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:41.006 23:07:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:41.006 23:07:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:41.006 23:07:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:41.006 23:07:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:41.006 23:07:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:41.006 23:07:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:41.006 23:07:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:41.006 23:07:56 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:41.262 23:07:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:41.520 [2024-07-15 23:07:56.624871] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:41.520 [2024-07-15 23:07:56.741712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:41.520 [2024-07-15 23:07:56.741718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.520 [2024-07-15 23:07:56.800934] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:41.520 [2024-07-15 23:07:56.800998] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:44.073 23:07:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:44.073 23:07:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:44.073 spdk_app_start Round 2 00:04:44.073 23:07:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2218182 /var/tmp/spdk-nbd.sock 00:04:44.073 23:07:59 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2218182 ']' 00:04:44.073 23:07:59 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:44.073 23:07:59 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:44.073 23:07:59 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:44.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
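The repeated grep -q -w nbdX /proc/partitions checks and single-block dd reads in every round come from the waitfornbd/waitfornbd_exit helpers in common/autotest_common.sh. A simplified sketch of what they do, assuming a short sleep between retries (only the 20-iteration bound is visible in the log):

# Simplified reconstruction of the wait helpers traced above.
waitfornbd() {
    local nbd_name=$1 i size
    for ((i = 1; i <= 20; i++)); do                        # bound visible in the xtrace
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1                                          # assumed back-off, not in the log
    done
    dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct   # prove the device answers reads
    size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [ "$size" != 0 ]                                       # non-empty read means the export is live
}

waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions || break   # device gone -> stop completed
        sleep 0.1                                          # assumed
    done
}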
00:04:44.073 23:07:59 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:44.073 23:07:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:44.329 23:07:59 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:44.329 23:07:59 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:44.329 23:07:59 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:44.586 Malloc0 00:04:44.586 23:07:59 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:44.844 Malloc1 00:04:44.844 23:08:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:44.844 23:08:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.844 23:08:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:44.844 23:08:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:44.844 23:08:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.844 23:08:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:44.844 23:08:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:44.844 23:08:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.844 23:08:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:44.844 23:08:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:44.844 23:08:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.844 23:08:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:44.844 23:08:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:44.844 23:08:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:44.844 23:08:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:44.844 23:08:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:45.101 /dev/nbd0 00:04:45.101 23:08:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:45.101 23:08:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:45.101 23:08:00 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:45.101 23:08:00 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:45.101 23:08:00 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:45.101 23:08:00 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:45.101 23:08:00 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:45.101 23:08:00 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:45.101 23:08:00 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:45.101 23:08:00 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:45.101 23:08:00 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:45.101 1+0 records in 00:04:45.101 1+0 records out 00:04:45.101 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000186849 s, 21.9 MB/s 00:04:45.101 23:08:00 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:45.101 23:08:00 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:45.101 23:08:00 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:45.101 23:08:00 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:45.101 23:08:00 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:45.101 23:08:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:45.101 23:08:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:45.101 23:08:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:45.357 /dev/nbd1 00:04:45.357 23:08:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:45.357 23:08:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:45.357 23:08:00 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:45.357 23:08:00 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:45.357 23:08:00 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:45.357 23:08:00 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:45.357 23:08:00 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:45.357 23:08:00 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:45.357 23:08:00 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:45.357 23:08:00 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:45.357 23:08:00 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:45.614 1+0 records in 00:04:45.614 1+0 records out 00:04:45.614 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209862 s, 19.5 MB/s 00:04:45.614 23:08:00 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:45.614 23:08:00 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:45.614 23:08:00 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:45.614 23:08:00 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:45.614 23:08:00 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:45.614 23:08:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:45.614 23:08:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:45.614 23:08:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:45.614 23:08:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.614 23:08:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:45.614 23:08:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:45.614 { 00:04:45.614 "nbd_device": "/dev/nbd0", 00:04:45.614 "bdev_name": "Malloc0" 00:04:45.614 }, 00:04:45.614 { 00:04:45.614 "nbd_device": "/dev/nbd1", 00:04:45.614 "bdev_name": "Malloc1" 00:04:45.614 } 00:04:45.614 ]' 00:04:45.614 23:08:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:45.614 { 00:04:45.614 "nbd_device": "/dev/nbd0", 00:04:45.614 "bdev_name": "Malloc0" 00:04:45.614 }, 00:04:45.614 { 00:04:45.614 "nbd_device": "/dev/nbd1", 00:04:45.614 "bdev_name": "Malloc1" 00:04:45.614 } 00:04:45.614 ]' 00:04:45.614 23:08:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:45.872 23:08:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:45.872 /dev/nbd1' 00:04:45.872 23:08:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:45.872 /dev/nbd1' 00:04:45.872 23:08:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:45.872 23:08:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:45.872 23:08:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:45.872 23:08:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:45.872 23:08:00 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:45.872 23:08:00 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:45.872 23:08:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.872 23:08:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:45.872 23:08:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:45.872 23:08:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:45.872 23:08:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:45.872 23:08:00 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:45.872 256+0 records in 00:04:45.872 256+0 records out 00:04:45.872 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00507952 s, 206 MB/s 00:04:45.872 23:08:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:45.872 23:08:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:45.872 256+0 records in 00:04:45.872 256+0 records out 00:04:45.872 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0268753 s, 39.0 MB/s 00:04:45.872 23:08:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:45.872 23:08:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:45.872 256+0 records in 00:04:45.872 256+0 records out 00:04:45.872 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0257825 s, 40.7 MB/s 00:04:45.872 23:08:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:45.872 23:08:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.872 23:08:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:45.872 23:08:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:45.872 23:08:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:45.872 23:08:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:45.872 23:08:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:45.872 23:08:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:45.872 23:08:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:45.872 23:08:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:45.872 23:08:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:45.872 23:08:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:45.872 23:08:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:45.873 23:08:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.873 23:08:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.873 23:08:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:45.873 23:08:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:45.873 23:08:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:45.873 23:08:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:46.129 23:08:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:46.130 23:08:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:46.130 23:08:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:46.130 23:08:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:46.130 23:08:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:46.130 23:08:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:46.130 23:08:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:46.130 23:08:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:46.130 23:08:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:46.130 23:08:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:46.387 23:08:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:46.387 23:08:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:46.387 23:08:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:46.387 23:08:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:46.387 23:08:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:46.387 23:08:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:46.387 23:08:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:46.387 23:08:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:46.387 23:08:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:46.387 23:08:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.387 23:08:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:46.644 23:08:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:46.644 23:08:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:46.644 23:08:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:46.644 23:08:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:46.644 23:08:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:46.644 23:08:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:46.644 23:08:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:46.644 23:08:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:46.644 23:08:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:46.644 23:08:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:46.644 23:08:01 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:46.644 23:08:01 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:46.644 23:08:01 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:46.901 23:08:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:47.158 [2024-07-15 23:08:02.413830] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:47.415 [2024-07-15 23:08:02.530569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.415 [2024-07-15 23:08:02.530571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:47.415 [2024-07-15 23:08:02.594875] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:47.415 [2024-07-15 23:08:02.594947] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:49.938 23:08:05 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2218182 /var/tmp/spdk-nbd.sock 00:04:49.938 23:08:05 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2218182 ']' 00:04:49.938 23:08:05 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:49.938 23:08:05 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:49.938 23:08:05 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:49.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
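Before and after each teardown the harness also counts exported devices by parsing the nbd_get_disks JSON with jq, expecting 2 while Malloc0/Malloc1 are attached and 0 once they are stopped (the empty '[]' result above). A condensed sketch of that check, with the rpc.py path shortened:

# Counts /dev/nbd* entries reported by the target; '|| true' keeps the count
# at 0 instead of failing when grep -c finds no matches, as the trace shows.
count_nbd() {
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
        | jq -r '.[] | .nbd_device' \
        | grep -c /dev/nbd || true
}

[ "$(count_nbd)" -eq 2 ]   # while both malloc bdevs are exported
# ... nbd_stop_disk for both devices ...
[ "$(count_nbd)" -eq 0 ]   # after teardown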
00:04:49.938 23:08:05 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:49.938 23:08:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:50.195 23:08:05 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:50.195 23:08:05 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:50.195 23:08:05 event.app_repeat -- event/event.sh@39 -- # killprocess 2218182 00:04:50.195 23:08:05 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 2218182 ']' 00:04:50.195 23:08:05 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 2218182 00:04:50.195 23:08:05 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:04:50.195 23:08:05 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:50.195 23:08:05 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2218182 00:04:50.195 23:08:05 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:50.195 23:08:05 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:50.195 23:08:05 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2218182' 00:04:50.195 killing process with pid 2218182 00:04:50.195 23:08:05 event.app_repeat -- common/autotest_common.sh@967 -- # kill 2218182 00:04:50.195 23:08:05 event.app_repeat -- common/autotest_common.sh@972 -- # wait 2218182 00:04:50.452 spdk_app_start is called in Round 0. 00:04:50.452 Shutdown signal received, stop current app iteration 00:04:50.452 Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 reinitialization... 00:04:50.452 spdk_app_start is called in Round 1. 00:04:50.452 Shutdown signal received, stop current app iteration 00:04:50.452 Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 reinitialization... 00:04:50.452 spdk_app_start is called in Round 2. 00:04:50.452 Shutdown signal received, stop current app iteration 00:04:50.452 Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 reinitialization... 00:04:50.452 spdk_app_start is called in Round 3. 
00:04:50.452 Shutdown signal received, stop current app iteration 00:04:50.452 23:08:05 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:50.452 23:08:05 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:50.452 00:04:50.452 real 0m17.976s 00:04:50.452 user 0m38.823s 00:04:50.452 sys 0m3.229s 00:04:50.452 23:08:05 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.452 23:08:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:50.452 ************************************ 00:04:50.452 END TEST app_repeat 00:04:50.452 ************************************ 00:04:50.452 23:08:05 event -- common/autotest_common.sh@1142 -- # return 0 00:04:50.452 23:08:05 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:50.452 23:08:05 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:50.452 23:08:05 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:50.452 23:08:05 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.452 23:08:05 event -- common/autotest_common.sh@10 -- # set +x 00:04:50.452 ************************************ 00:04:50.452 START TEST cpu_locks 00:04:50.452 ************************************ 00:04:50.452 23:08:05 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:50.710 * Looking for test storage... 00:04:50.710 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:50.710 23:08:05 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:50.710 23:08:05 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:50.710 23:08:05 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:50.710 23:08:05 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:50.710 23:08:05 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:50.710 23:08:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.710 23:08:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:50.710 ************************************ 00:04:50.710 START TEST default_locks 00:04:50.710 ************************************ 00:04:50.710 23:08:05 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:04:50.710 23:08:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2220599 00:04:50.710 23:08:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:50.710 23:08:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2220599 00:04:50.710 23:08:05 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2220599 ']' 00:04:50.710 23:08:05 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.710 23:08:05 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:50.710 23:08:05 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
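
The default_locks case that starts here is the simplest of the cpu_locks scenarios: launch one spdk_tgt pinned to core 0, confirm it holds a per-core lock file, kill it, and then expect waitforlisten on the dead PID to fail (the NOT wrapper further down inverts the exit status so that failure counts as success). The lock check itself is just lslocks filtered for the spdk_cpu_lock prefix; the "lslocks: write error" lines in the log are only lslocks complaining that grep -q closed the pipe after the first match. A rough sketch of the flow, under the same kind of path assumptions as above:

    #!/usr/bin/env bash
    # Sketch of the default_locks flow: a lone target on core 0 must hold an spdk_cpu_lock file.
    set -euo pipefail

    SPDK_TGT=./build/bin/spdk_tgt          # assumption: SPDK build tree

    "$SPDK_TGT" -m 0x1 &                   # core mask 0x1 -> one reactor on core 0
    pid=$!
    sleep 2                                # the real test polls the RPC socket (waitforlisten)

    if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
        echo "core lock held by pid $pid"
    else
        echo "expected pid $pid to hold an spdk_cpu_lock file" >&2
    fi

    kill "$pid"
    wait "$pid" 2>/dev/null || true
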
00:04:50.710 23:08:05 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:50.710 23:08:05 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:50.710 [2024-07-15 23:08:05.856931] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:04:50.710 [2024-07-15 23:08:05.857014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2220599 ] 00:04:50.710 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.710 [2024-07-15 23:08:05.913973] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.710 [2024-07-15 23:08:06.021439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.275 23:08:06 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:51.275 23:08:06 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:04:51.275 23:08:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2220599 00:04:51.275 23:08:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2220599 00:04:51.275 23:08:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:51.275 lslocks: write error 00:04:51.275 23:08:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2220599 00:04:51.275 23:08:06 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 2220599 ']' 00:04:51.275 23:08:06 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 2220599 00:04:51.275 23:08:06 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:04:51.275 23:08:06 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:51.275 23:08:06 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2220599 00:04:51.275 23:08:06 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:51.275 23:08:06 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:51.275 23:08:06 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2220599' 00:04:51.275 killing process with pid 2220599 00:04:51.275 23:08:06 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 2220599 00:04:51.275 23:08:06 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 2220599 00:04:51.840 23:08:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2220599 00:04:51.840 23:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:04:51.840 23:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2220599 00:04:51.840 23:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:04:51.840 23:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:51.840 23:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:04:51.840 23:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:51.840 23:08:07 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 2220599 00:04:51.840 23:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2220599 ']' 00:04:51.840 23:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.840 23:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:51.840 23:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.840 23:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:51.840 23:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:51.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2220599) - No such process 00:04:51.840 ERROR: process (pid: 2220599) is no longer running 00:04:51.840 23:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:51.840 23:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:04:51.840 23:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:04:51.840 23:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:51.840 23:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:51.840 23:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:51.840 23:08:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:51.840 23:08:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:51.840 23:08:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:51.840 23:08:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:51.840 00:04:51.840 real 0m1.236s 00:04:51.840 user 0m1.133s 00:04:51.840 sys 0m0.537s 00:04:51.840 23:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.840 23:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:51.840 ************************************ 00:04:51.840 END TEST default_locks 00:04:51.840 ************************************ 00:04:51.840 23:08:07 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:51.840 23:08:07 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:51.840 23:08:07 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.840 23:08:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.840 23:08:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:51.840 ************************************ 00:04:51.840 START TEST default_locks_via_rpc 00:04:51.840 ************************************ 00:04:51.840 23:08:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:04:51.840 23:08:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2220827 00:04:51.840 23:08:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:51.840 23:08:07 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2220827 00:04:51.840 23:08:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2220827 ']' 00:04:51.840 23:08:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.840 23:08:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:51.840 23:08:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.840 23:08:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:51.840 23:08:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.840 [2024-07-15 23:08:07.141434] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:04:51.840 [2024-07-15 23:08:07.141520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2220827 ] 00:04:52.098 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.098 [2024-07-15 23:08:07.203913] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.098 [2024-07-15 23:08:07.319437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.355 23:08:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:52.355 23:08:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:52.355 23:08:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:52.355 23:08:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:52.355 23:08:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.355 23:08:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:52.355 23:08:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:52.355 23:08:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:52.355 23:08:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:52.355 23:08:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:52.355 23:08:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:52.355 23:08:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:52.355 23:08:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.355 23:08:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:52.355 23:08:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2220827 00:04:52.355 23:08:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2220827 00:04:52.355 23:08:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:52.918 
23:08:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2220827 00:04:52.918 23:08:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 2220827 ']' 00:04:52.918 23:08:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 2220827 00:04:52.918 23:08:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:04:52.918 23:08:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:52.918 23:08:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2220827 00:04:52.918 23:08:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:52.918 23:08:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:52.918 23:08:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2220827' 00:04:52.918 killing process with pid 2220827 00:04:52.918 23:08:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 2220827 00:04:52.918 23:08:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 2220827 00:04:53.176 00:04:53.176 real 0m1.335s 00:04:53.176 user 0m1.270s 00:04:53.176 sys 0m0.540s 00:04:53.176 23:08:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.176 23:08:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.176 ************************************ 00:04:53.176 END TEST default_locks_via_rpc 00:04:53.176 ************************************ 00:04:53.176 23:08:08 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:53.176 23:08:08 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:53.176 23:08:08 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:53.176 23:08:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.176 23:08:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:53.176 ************************************ 00:04:53.176 START TEST non_locking_app_on_locked_coremask 00:04:53.176 ************************************ 00:04:53.176 23:08:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:04:53.176 23:08:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2220999 00:04:53.176 23:08:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:53.176 23:08:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2220999 /var/tmp/spdk.sock 00:04:53.176 23:08:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2220999 ']' 00:04:53.176 23:08:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.176 23:08:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:53.176 23:08:08 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.176 23:08:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:53.176 23:08:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:53.433 [2024-07-15 23:08:08.526325] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:04:53.433 [2024-07-15 23:08:08.526425] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2220999 ] 00:04:53.433 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.433 [2024-07-15 23:08:08.584054] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.433 [2024-07-15 23:08:08.690515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.690 23:08:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:53.690 23:08:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:53.690 23:08:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2221008 00:04:53.690 23:08:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:53.690 23:08:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2221008 /var/tmp/spdk2.sock 00:04:53.690 23:08:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2221008 ']' 00:04:53.690 23:08:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:53.690 23:08:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:53.690 23:08:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:53.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:53.690 23:08:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:53.690 23:08:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:53.946 [2024-07-15 23:08:09.010065] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:04:53.946 [2024-07-15 23:08:09.010166] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2221008 ] 00:04:53.946 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.946 [2024-07-15 23:08:09.106987] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
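
non_locking_app_on_locked_coremask, traced above, covers the permissive direction: while the first target owns the core-0 lock, a second target started with --disable-cpumask-locks and its own RPC socket must still come up on the same core, which is why pid 2221008 logs "CPU core locks deactivated". Condensed into a sketch with the same path assumptions as earlier:

    #!/usr/bin/env bash
    # Sketch: a second instance may share core 0 only because it opts out of cpumask locks.
    set -euo pipefail
    SPDK_TGT=./build/bin/spdk_tgt          # assumption: SPDK build tree

    "$SPDK_TGT" -m 0x1 &                   # first instance claims the core-0 lock
    pid1=$!
    sleep 2

    "$SPDK_TGT" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # takes and checks no locks
    pid2=$!
    sleep 2

    if lslocks -p "$pid1" | grep -q spdk_cpu_lock; then
        echo "first instance still owns the core lock; the second runs unlocked beside it"
    fi
    kill "$pid2" "$pid1"
    wait 2>/dev/null || true
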
00:04:53.946 [2024-07-15 23:08:09.107019] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.203 [2024-07-15 23:08:09.346607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.768 23:08:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:54.768 23:08:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:54.768 23:08:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2220999 00:04:54.768 23:08:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2220999 00:04:54.768 23:08:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:55.333 lslocks: write error 00:04:55.333 23:08:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2220999 00:04:55.333 23:08:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2220999 ']' 00:04:55.333 23:08:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2220999 00:04:55.333 23:08:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:04:55.333 23:08:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:55.333 23:08:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2220999 00:04:55.333 23:08:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:55.333 23:08:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:55.333 23:08:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2220999' 00:04:55.333 killing process with pid 2220999 00:04:55.333 23:08:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2220999 00:04:55.333 23:08:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2220999 00:04:56.264 23:08:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2221008 00:04:56.264 23:08:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2221008 ']' 00:04:56.264 23:08:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2221008 00:04:56.264 23:08:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:04:56.264 23:08:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:56.264 23:08:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2221008 00:04:56.264 23:08:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:56.264 23:08:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:56.264 23:08:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2221008' 00:04:56.264 
killing process with pid 2221008 00:04:56.264 23:08:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2221008 00:04:56.264 23:08:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2221008 00:04:56.521 00:04:56.521 real 0m3.327s 00:04:56.521 user 0m3.488s 00:04:56.521 sys 0m1.038s 00:04:56.521 23:08:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.521 23:08:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:56.521 ************************************ 00:04:56.521 END TEST non_locking_app_on_locked_coremask 00:04:56.521 ************************************ 00:04:56.521 23:08:11 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:56.521 23:08:11 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:56.521 23:08:11 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:56.521 23:08:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.521 23:08:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:56.779 ************************************ 00:04:56.780 START TEST locking_app_on_unlocked_coremask 00:04:56.780 ************************************ 00:04:56.780 23:08:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:04:56.780 23:08:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2221431 00:04:56.780 23:08:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:56.780 23:08:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2221431 /var/tmp/spdk.sock 00:04:56.780 23:08:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2221431 ']' 00:04:56.780 23:08:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.780 23:08:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:56.780 23:08:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.780 23:08:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:56.780 23:08:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:56.780 [2024-07-15 23:08:11.899318] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
00:04:56.780 [2024-07-15 23:08:11.899397] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2221431 ] 00:04:56.780 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.780 [2024-07-15 23:08:11.964819] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:56.780 [2024-07-15 23:08:11.964869] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.780 [2024-07-15 23:08:12.084220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.346 23:08:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:57.346 23:08:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:57.346 23:08:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2221448 00:04:57.346 23:08:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:57.346 23:08:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2221448 /var/tmp/spdk2.sock 00:04:57.346 23:08:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2221448 ']' 00:04:57.346 23:08:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:57.346 23:08:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:57.346 23:08:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:57.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:57.346 23:08:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:57.346 23:08:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:57.346 [2024-07-15 23:08:12.407500] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
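
locking_app_on_unlocked_coremask is the mirror image: here the first target is the one that skips the locks, so the second, normally started target on the same core becomes the owner of the core-0 lock file, which is what the locks_exist check against the second PID verifies below. In outline (paths again assumed from an SPDK build tree):

    #!/usr/bin/env bash
    # Sketch: when the first instance disables cpumask locks, the second (normal) one
    # started on the same core becomes the lock holder.
    SPDK_TGT=./build/bin/spdk_tgt          # assumption: SPDK build tree

    "$SPDK_TGT" -m 0x1 --disable-cpumask-locks &        # unlocked coremask
    pid1=$!
    sleep 2
    "$SPDK_TGT" -m 0x1 -r /var/tmp/spdk2.sock &         # normal instance on the same core
    pid2=$!
    sleep 2
    lslocks -p "$pid2" | grep -q spdk_cpu_lock && echo "second instance owns the core-0 lock"
    kill "$pid1" "$pid2"
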
00:04:57.346 [2024-07-15 23:08:12.407583] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2221448 ] 00:04:57.346 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.346 [2024-07-15 23:08:12.504999] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.604 [2024-07-15 23:08:12.746619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.168 23:08:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:58.168 23:08:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:58.168 23:08:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2221448 00:04:58.168 23:08:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2221448 00:04:58.168 23:08:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:58.731 lslocks: write error 00:04:58.731 23:08:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2221431 00:04:58.731 23:08:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2221431 ']' 00:04:58.731 23:08:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2221431 00:04:58.731 23:08:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:04:58.731 23:08:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:58.731 23:08:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2221431 00:04:58.731 23:08:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:58.731 23:08:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:58.731 23:08:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2221431' 00:04:58.731 killing process with pid 2221431 00:04:58.731 23:08:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2221431 00:04:58.731 23:08:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2221431 00:04:59.663 23:08:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2221448 00:04:59.664 23:08:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2221448 ']' 00:04:59.664 23:08:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2221448 00:04:59.664 23:08:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:04:59.664 23:08:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:59.664 23:08:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2221448 00:04:59.664 23:08:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:04:59.664 23:08:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:59.664 23:08:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2221448' 00:04:59.664 killing process with pid 2221448 00:04:59.664 23:08:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2221448 00:04:59.664 23:08:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2221448 00:05:00.229 00:05:00.229 real 0m3.492s 00:05:00.229 user 0m3.636s 00:05:00.229 sys 0m1.104s 00:05:00.229 23:08:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:00.229 23:08:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:00.229 ************************************ 00:05:00.229 END TEST locking_app_on_unlocked_coremask 00:05:00.229 ************************************ 00:05:00.229 23:08:15 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:00.229 23:08:15 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:00.229 23:08:15 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:00.229 23:08:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.229 23:08:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:00.229 ************************************ 00:05:00.229 START TEST locking_app_on_locked_coremask 00:05:00.229 ************************************ 00:05:00.229 23:08:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:00.229 23:08:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2221877 00:05:00.229 23:08:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:00.229 23:08:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2221877 /var/tmp/spdk.sock 00:05:00.229 23:08:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2221877 ']' 00:05:00.229 23:08:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.229 23:08:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:00.229 23:08:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.229 23:08:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:00.229 23:08:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:00.229 [2024-07-15 23:08:15.443816] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
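
locking_app_on_locked_coremask, beginning here, is the plain conflict case: with a normal target already holding core 0, a second normal target on the same mask must refuse to start, which shows up further down as "Cannot create lock on core 0, probably process 2221877 has claimed it" followed by "Unable to acquire lock on assigned core mask - exiting." A sketch of that expectation; timeout is used only as a guard here, whereas the real test drives this through NOT waitforlisten:

    #!/usr/bin/env bash
    # Sketch: a second normally-locked instance on an already-claimed core must fail to start.
    SPDK_TGT=./build/bin/spdk_tgt          # assumption: SPDK build tree

    "$SPDK_TGT" -m 0x1 &                   # claims core 0
    pid1=$!
    sleep 2

    timeout 10 "$SPDK_TGT" -m 0x1 -r /var/tmp/spdk2.sock   # expected: "Unable to acquire lock ..."
    rc=$?
    if [ "$rc" -ne 0 ] && [ "$rc" -ne 124 ]; then           # 124 would mean it kept running
        echo "second instance was rejected as expected (exit $rc)"
    else
        echo "FAIL: second instance was not rejected" >&2
    fi

    kill "$pid1"
    wait "$pid1" 2>/dev/null || true
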
00:05:00.229 [2024-07-15 23:08:15.443910] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2221877 ] 00:05:00.229 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.229 [2024-07-15 23:08:15.501322] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.487 [2024-07-15 23:08:15.613453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.746 23:08:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:00.746 23:08:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:00.746 23:08:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2221900 00:05:00.746 23:08:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2221900 /var/tmp/spdk2.sock 00:05:00.746 23:08:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:00.746 23:08:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:00.746 23:08:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2221900 /var/tmp/spdk2.sock 00:05:00.746 23:08:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:00.746 23:08:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:00.746 23:08:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:00.746 23:08:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:00.746 23:08:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2221900 /var/tmp/spdk2.sock 00:05:00.746 23:08:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2221900 ']' 00:05:00.746 23:08:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:00.746 23:08:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:00.746 23:08:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:00.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:00.746 23:08:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:00.746 23:08:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:00.746 [2024-07-15 23:08:15.935309] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
00:05:00.746 [2024-07-15 23:08:15.935395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2221900 ] 00:05:00.746 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.746 [2024-07-15 23:08:16.034195] app.c: 776:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2221877 has claimed it. 00:05:00.746 [2024-07-15 23:08:16.034244] app.c: 907:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:01.311 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2221900) - No such process 00:05:01.311 ERROR: process (pid: 2221900) is no longer running 00:05:01.311 23:08:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:01.311 23:08:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:01.311 23:08:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:01.311 23:08:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:01.311 23:08:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:01.311 23:08:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:01.311 23:08:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2221877 00:05:01.311 23:08:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2221877 00:05:01.311 23:08:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:01.875 lslocks: write error 00:05:01.875 23:08:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2221877 00:05:01.875 23:08:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2221877 ']' 00:05:01.875 23:08:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2221877 00:05:01.875 23:08:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:01.875 23:08:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:01.875 23:08:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2221877 00:05:01.875 23:08:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:01.875 23:08:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:01.875 23:08:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2221877' 00:05:01.875 killing process with pid 2221877 00:05:01.875 23:08:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2221877 00:05:01.876 23:08:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2221877 00:05:02.132 00:05:02.132 real 0m2.008s 00:05:02.132 user 0m2.163s 00:05:02.132 sys 0m0.651s 00:05:02.132 23:08:17 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:02.132 23:08:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:02.132 ************************************ 00:05:02.132 END TEST locking_app_on_locked_coremask 00:05:02.132 ************************************ 00:05:02.132 23:08:17 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:02.132 23:08:17 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:02.132 23:08:17 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:02.132 23:08:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.132 23:08:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:02.132 ************************************ 00:05:02.132 START TEST locking_overlapped_coremask 00:05:02.132 ************************************ 00:05:02.132 23:08:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:02.389 23:08:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2222167 00:05:02.389 23:08:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2222167 /var/tmp/spdk.sock 00:05:02.389 23:08:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:02.389 23:08:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2222167 ']' 00:05:02.389 23:08:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.389 23:08:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:02.389 23:08:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.390 23:08:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:02.390 23:08:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:02.390 [2024-07-15 23:08:17.498216] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
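
locking_overlapped_coremask widens the conflict to masks that only partially overlap: the first target takes cores 0-2 (-m 0x7, binary 111), the second asks for cores 2-4 (-m 0x1c, binary 11100), and the single shared core 2 is enough to make the second one bail out; check_remaining_locks then confirms that exactly spdk_cpu_lock_000 through _002 are left behind. Roughly, under the same assumptions as the earlier sketches:

    #!/usr/bin/env bash
    # Sketch: partially overlapping core masks conflict on the one shared core (core 2 here).
    SPDK_TGT=./build/bin/spdk_tgt          # assumption: SPDK build tree

    "$SPDK_TGT" -m 0x7 &                   # cores 0,1,2 -> /var/tmp/spdk_cpu_lock_000..002
    pid1=$!
    sleep 2

    timeout 10 "$SPDK_TGT" -m 0x1c -r /var/tmp/spdk2.sock   # cores 2,3,4 -> collides on core 2
    rc=$?
    if [ "$rc" -ne 0 ] && [ "$rc" -ne 124 ]; then
        echo "overlapping instance rejected as expected (exit $rc)"
    fi

    ls /var/tmp/spdk_cpu_lock_*            # survivor's locks: _000, _001 and _002 only
    kill "$pid1"
    wait "$pid1" 2>/dev/null || true
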
00:05:02.390 [2024-07-15 23:08:17.498278] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2222167 ] 00:05:02.390 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.390 [2024-07-15 23:08:17.555435] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:02.390 [2024-07-15 23:08:17.664266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.390 [2024-07-15 23:08:17.664331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:02.390 [2024-07-15 23:08:17.664335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.646 23:08:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:02.646 23:08:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:02.646 23:08:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2222182 00:05:02.646 23:08:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2222182 /var/tmp/spdk2.sock 00:05:02.646 23:08:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:02.646 23:08:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2222182 /var/tmp/spdk2.sock 00:05:02.646 23:08:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:02.646 23:08:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:02.646 23:08:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:02.646 23:08:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:02.646 23:08:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:02.646 23:08:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2222182 /var/tmp/spdk2.sock 00:05:02.646 23:08:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2222182 ']' 00:05:02.646 23:08:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:02.646 23:08:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:02.646 23:08:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:02.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:02.646 23:08:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:02.646 23:08:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:02.903 [2024-07-15 23:08:17.969818] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
00:05:02.903 [2024-07-15 23:08:17.969900] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2222182 ] 00:05:02.903 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.903 [2024-07-15 23:08:18.058346] app.c: 776:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2222167 has claimed it. 00:05:02.903 [2024-07-15 23:08:18.058398] app.c: 907:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:03.467 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2222182) - No such process 00:05:03.467 ERROR: process (pid: 2222182) is no longer running 00:05:03.467 23:08:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:03.467 23:08:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:03.467 23:08:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:03.467 23:08:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:03.467 23:08:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:03.467 23:08:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:03.467 23:08:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:03.467 23:08:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:03.467 23:08:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:03.467 23:08:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:03.467 23:08:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2222167 00:05:03.467 23:08:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 2222167 ']' 00:05:03.467 23:08:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 2222167 00:05:03.467 23:08:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:03.467 23:08:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:03.467 23:08:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2222167 00:05:03.467 23:08:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:03.467 23:08:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:03.467 23:08:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2222167' 00:05:03.467 killing process with pid 2222167 00:05:03.467 23:08:18 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 2222167 00:05:03.467 23:08:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 2222167 00:05:04.032 00:05:04.032 real 0m1.692s 00:05:04.032 user 0m4.475s 00:05:04.032 sys 0m0.451s 00:05:04.032 23:08:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.032 23:08:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:04.032 ************************************ 00:05:04.032 END TEST locking_overlapped_coremask 00:05:04.032 ************************************ 00:05:04.032 23:08:19 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:04.032 23:08:19 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:04.032 23:08:19 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:04.032 23:08:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.032 23:08:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:04.032 ************************************ 00:05:04.032 START TEST locking_overlapped_coremask_via_rpc 00:05:04.032 ************************************ 00:05:04.032 23:08:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:04.032 23:08:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2222344 00:05:04.032 23:08:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:04.033 23:08:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2222344 /var/tmp/spdk.sock 00:05:04.033 23:08:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2222344 ']' 00:05:04.033 23:08:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.033 23:08:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:04.033 23:08:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.033 23:08:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:04.033 23:08:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.033 [2024-07-15 23:08:19.242615] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:05:04.033 [2024-07-15 23:08:19.242686] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2222344 ] 00:05:04.033 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.033 [2024-07-15 23:08:19.304408] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:04.033 [2024-07-15 23:08:19.304448] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:04.334 [2024-07-15 23:08:19.419039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.334 [2024-07-15 23:08:19.419092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:04.334 [2024-07-15 23:08:19.419095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.616 23:08:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:04.616 23:08:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:04.616 23:08:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2222474 00:05:04.616 23:08:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2222474 /var/tmp/spdk2.sock 00:05:04.616 23:08:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:04.616 23:08:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2222474 ']' 00:05:04.616 23:08:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:04.616 23:08:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:04.616 23:08:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:04.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:04.616 23:08:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:04.616 23:08:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.616 [2024-07-15 23:08:19.737216] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:05:04.616 [2024-07-15 23:08:19.737299] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2222474 ] 00:05:04.616 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.616 [2024-07-15 23:08:19.825309] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:04.616 [2024-07-15 23:08:19.825355] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:04.873 [2024-07-15 23:08:20.057628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:04.873 [2024-07-15 23:08:20.057678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:04.873 [2024-07-15 23:08:20.057680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:05.437 23:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:05.437 23:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:05.437 23:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:05.437 23:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:05.437 23:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.437 23:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:05.437 23:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:05.437 23:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:05.437 23:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:05.437 23:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:05.437 23:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:05.437 23:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:05.437 23:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:05.437 23:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:05.437 23:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:05.437 23:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.437 [2024-07-15 23:08:20.697844] app.c: 776:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2222344 has claimed it. 
00:05:05.437 request: 00:05:05.437 { 00:05:05.437 "method": "framework_enable_cpumask_locks", 00:05:05.437 "req_id": 1 00:05:05.437 } 00:05:05.437 Got JSON-RPC error response 00:05:05.437 response: 00:05:05.437 { 00:05:05.437 "code": -32603, 00:05:05.437 "message": "Failed to claim CPU core: 2" 00:05:05.437 } 00:05:05.437 23:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:05.437 23:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:05.437 23:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:05.437 23:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:05.437 23:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:05.437 23:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2222344 /var/tmp/spdk.sock 00:05:05.437 23:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2222344 ']' 00:05:05.437 23:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.437 23:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:05.437 23:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.437 23:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:05.437 23:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.694 23:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:05.694 23:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:05.694 23:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2222474 /var/tmp/spdk2.sock 00:05:05.694 23:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2222474 ']' 00:05:05.694 23:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:05.694 23:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:05.694 23:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:05.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:05.694 23:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:05.694 23:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.951 23:08:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:05.951 23:08:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:05.951 23:08:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:05.951 23:08:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:05.951 23:08:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:05.951 23:08:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:05.951 00:05:05.951 real 0m2.040s 00:05:05.951 user 0m1.086s 00:05:05.951 sys 0m0.171s 00:05:05.951 23:08:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.951 23:08:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.951 ************************************ 00:05:05.951 END TEST locking_overlapped_coremask_via_rpc 00:05:05.951 ************************************ 00:05:05.951 23:08:21 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:05.951 23:08:21 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:05.951 23:08:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2222344 ]] 00:05:05.951 23:08:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2222344 00:05:05.951 23:08:21 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2222344 ']' 00:05:05.951 23:08:21 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2222344 00:05:05.951 23:08:21 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:05.951 23:08:21 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:05.951 23:08:21 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2222344 00:05:06.208 23:08:21 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:06.208 23:08:21 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:06.208 23:08:21 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2222344' 00:05:06.209 killing process with pid 2222344 00:05:06.209 23:08:21 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2222344 00:05:06.209 23:08:21 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2222344 00:05:06.465 23:08:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2222474 ]] 00:05:06.465 23:08:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2222474 00:05:06.465 23:08:21 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2222474 ']' 00:05:06.465 23:08:21 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2222474 00:05:06.465 23:08:21 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:05:06.465 23:08:21 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:06.465 23:08:21 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2222474 00:05:06.465 23:08:21 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:06.465 23:08:21 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:06.465 23:08:21 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2222474' 00:05:06.465 killing process with pid 2222474 00:05:06.465 23:08:21 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2222474 00:05:06.465 23:08:21 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2222474 00:05:07.030 23:08:22 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:07.030 23:08:22 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:07.030 23:08:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2222344 ]] 00:05:07.030 23:08:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2222344 00:05:07.030 23:08:22 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2222344 ']' 00:05:07.030 23:08:22 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2222344 00:05:07.030 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2222344) - No such process 00:05:07.030 23:08:22 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2222344 is not found' 00:05:07.030 Process with pid 2222344 is not found 00:05:07.030 23:08:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2222474 ]] 00:05:07.030 23:08:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2222474 00:05:07.030 23:08:22 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2222474 ']' 00:05:07.030 23:08:22 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2222474 00:05:07.030 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2222474) - No such process 00:05:07.030 23:08:22 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2222474 is not found' 00:05:07.030 Process with pid 2222474 is not found 00:05:07.030 23:08:22 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:07.030 00:05:07.030 real 0m16.503s 00:05:07.030 user 0m28.441s 00:05:07.030 sys 0m5.420s 00:05:07.030 23:08:22 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.030 23:08:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:07.030 ************************************ 00:05:07.030 END TEST cpu_locks 00:05:07.031 ************************************ 00:05:07.031 23:08:22 event -- common/autotest_common.sh@1142 -- # return 0 00:05:07.031 00:05:07.031 real 0m40.531s 00:05:07.031 user 1m16.296s 00:05:07.031 sys 0m9.437s 00:05:07.031 23:08:22 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.031 23:08:22 event -- common/autotest_common.sh@10 -- # set +x 00:05:07.031 ************************************ 00:05:07.031 END TEST event 00:05:07.031 ************************************ 00:05:07.031 23:08:22 -- common/autotest_common.sh@1142 -- # return 0 00:05:07.031 23:08:22 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:07.031 23:08:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.031 23:08:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.031 
23:08:22 -- common/autotest_common.sh@10 -- # set +x 00:05:07.031 ************************************ 00:05:07.031 START TEST thread 00:05:07.031 ************************************ 00:05:07.031 23:08:22 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:07.288 * Looking for test storage... 00:05:07.288 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:07.288 23:08:22 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:07.288 23:08:22 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:07.288 23:08:22 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.288 23:08:22 thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.288 ************************************ 00:05:07.288 START TEST thread_poller_perf 00:05:07.288 ************************************ 00:05:07.288 23:08:22 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:07.288 [2024-07-15 23:08:22.391452] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:05:07.288 [2024-07-15 23:08:22.391519] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2222853 ] 00:05:07.288 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.288 [2024-07-15 23:08:22.454029] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.288 [2024-07-15 23:08:22.570280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.288 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:08.659 ====================================== 00:05:08.659 busy:2712578325 (cyc) 00:05:08.659 total_run_count: 292000 00:05:08.659 tsc_hz: 2700000000 (cyc) 00:05:08.659 ====================================== 00:05:08.659 poller_cost: 9289 (cyc), 3440 (nsec) 00:05:08.659 00:05:08.659 real 0m1.327s 00:05:08.659 user 0m1.243s 00:05:08.659 sys 0m0.078s 00:05:08.659 23:08:23 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.659 23:08:23 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:08.659 ************************************ 00:05:08.659 END TEST thread_poller_perf 00:05:08.659 ************************************ 00:05:08.659 23:08:23 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:08.659 23:08:23 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:08.659 23:08:23 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:08.659 23:08:23 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.659 23:08:23 thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.659 ************************************ 00:05:08.659 START TEST thread_poller_perf 00:05:08.659 ************************************ 00:05:08.659 23:08:23 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:08.659 [2024-07-15 23:08:23.767646] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:05:08.659 [2024-07-15 23:08:23.767719] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2223019 ] 00:05:08.659 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.659 [2024-07-15 23:08:23.833859] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.659 [2024-07-15 23:08:23.953783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.659 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:10.030 ====================================== 00:05:10.030 busy:2703038263 (cyc) 00:05:10.030 total_run_count: 3858000 00:05:10.030 tsc_hz: 2700000000 (cyc) 00:05:10.030 ====================================== 00:05:10.030 poller_cost: 700 (cyc), 259 (nsec) 00:05:10.030 00:05:10.030 real 0m1.326s 00:05:10.030 user 0m1.236s 00:05:10.030 sys 0m0.083s 00:05:10.030 23:08:25 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.030 23:08:25 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:10.030 ************************************ 00:05:10.030 END TEST thread_poller_perf 00:05:10.030 ************************************ 00:05:10.030 23:08:25 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:10.030 23:08:25 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:10.030 00:05:10.030 real 0m2.803s 00:05:10.030 user 0m2.537s 00:05:10.030 sys 0m0.266s 00:05:10.030 23:08:25 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.030 23:08:25 thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.030 ************************************ 00:05:10.030 END TEST thread 00:05:10.030 ************************************ 00:05:10.030 23:08:25 -- common/autotest_common.sh@1142 -- # return 0 00:05:10.030 23:08:25 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:10.030 23:08:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.030 23:08:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.030 23:08:25 -- common/autotest_common.sh@10 -- # set +x 00:05:10.030 ************************************ 00:05:10.030 START TEST accel 00:05:10.030 ************************************ 00:05:10.030 23:08:25 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:10.030 * Looking for test storage... 00:05:10.030 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:10.030 23:08:25 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:10.030 23:08:25 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:10.030 23:08:25 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:10.030 23:08:25 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2223315 00:05:10.030 23:08:25 accel -- accel/accel.sh@63 -- # waitforlisten 2223315 00:05:10.030 23:08:25 accel -- common/autotest_common.sh@829 -- # '[' -z 2223315 ']' 00:05:10.030 23:08:25 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:10.030 23:08:25 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.030 23:08:25 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:10.030 23:08:25 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:10.030 23:08:25 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:10.030 23:08:25 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:10.030 23:08:25 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:10.030 23:08:25 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:10.030 23:08:25 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:10.030 23:08:25 accel -- common/autotest_common.sh@10 -- # set +x 00:05:10.030 23:08:25 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:10.030 23:08:25 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:10.030 23:08:25 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:10.030 23:08:25 accel -- accel/accel.sh@41 -- # jq -r . 00:05:10.030 [2024-07-15 23:08:25.255583] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:05:10.030 [2024-07-15 23:08:25.255656] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2223315 ] 00:05:10.030 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.030 [2024-07-15 23:08:25.313184] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.287 [2024-07-15 23:08:25.427800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.545 23:08:25 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:10.545 23:08:25 accel -- common/autotest_common.sh@862 -- # return 0 00:05:10.545 23:08:25 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:10.545 23:08:25 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:10.545 23:08:25 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:10.545 23:08:25 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:10.545 23:08:25 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:10.545 23:08:25 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:10.545 23:08:25 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:10.545 23:08:25 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.545 23:08:25 accel -- common/autotest_common.sh@10 -- # set +x 00:05:10.545 23:08:25 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.545 23:08:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:10.545 23:08:25 accel -- accel/accel.sh@72 -- # IFS== 00:05:10.545 23:08:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:10.545 23:08:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:10.545 23:08:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:10.545 23:08:25 accel -- accel/accel.sh@72 -- # IFS== 00:05:10.545 23:08:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:10.545 23:08:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:10.545 23:08:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:10.545 23:08:25 accel -- accel/accel.sh@72 -- # IFS== 00:05:10.545 23:08:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:10.545 23:08:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:10.545 23:08:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:10.545 23:08:25 accel -- accel/accel.sh@72 -- # IFS== 00:05:10.545 23:08:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:10.545 23:08:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:10.545 23:08:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:10.545 23:08:25 accel -- accel/accel.sh@72 -- # IFS== 00:05:10.545 23:08:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:10.545 23:08:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:10.545 23:08:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:10.545 23:08:25 accel -- accel/accel.sh@72 -- # IFS== 00:05:10.545 23:08:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:10.545 23:08:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:10.545 23:08:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:10.545 23:08:25 accel -- accel/accel.sh@72 -- # IFS== 00:05:10.545 23:08:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:10.545 23:08:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:10.545 23:08:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:10.545 23:08:25 accel -- accel/accel.sh@72 -- # IFS== 00:05:10.545 23:08:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:10.545 23:08:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:10.545 23:08:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:10.545 23:08:25 accel -- accel/accel.sh@72 -- # IFS== 00:05:10.545 23:08:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:10.545 23:08:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:10.545 23:08:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:10.545 23:08:25 accel -- accel/accel.sh@72 -- # IFS== 00:05:10.545 23:08:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:10.545 23:08:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:10.545 23:08:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:10.545 23:08:25 accel -- accel/accel.sh@72 -- # IFS== 00:05:10.545 23:08:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:10.545 
23:08:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:10.545 23:08:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:10.545 23:08:25 accel -- accel/accel.sh@72 -- # IFS== 00:05:10.545 23:08:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:10.545 23:08:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:10.545 23:08:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:10.545 23:08:25 accel -- accel/accel.sh@72 -- # IFS== 00:05:10.545 23:08:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:10.545 23:08:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:10.545 23:08:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:10.545 23:08:25 accel -- accel/accel.sh@72 -- # IFS== 00:05:10.545 23:08:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:10.545 23:08:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:10.545 23:08:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:10.545 23:08:25 accel -- accel/accel.sh@72 -- # IFS== 00:05:10.545 23:08:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:10.545 23:08:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:10.545 23:08:25 accel -- accel/accel.sh@75 -- # killprocess 2223315 00:05:10.545 23:08:25 accel -- common/autotest_common.sh@948 -- # '[' -z 2223315 ']' 00:05:10.545 23:08:25 accel -- common/autotest_common.sh@952 -- # kill -0 2223315 00:05:10.545 23:08:25 accel -- common/autotest_common.sh@953 -- # uname 00:05:10.545 23:08:25 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:10.545 23:08:25 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2223315 00:05:10.545 23:08:25 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:10.545 23:08:25 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:10.545 23:08:25 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2223315' 00:05:10.545 killing process with pid 2223315 00:05:10.545 23:08:25 accel -- common/autotest_common.sh@967 -- # kill 2223315 00:05:10.545 23:08:25 accel -- common/autotest_common.sh@972 -- # wait 2223315 00:05:11.109 23:08:26 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:11.109 23:08:26 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:11.109 23:08:26 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:11.109 23:08:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.109 23:08:26 accel -- common/autotest_common.sh@10 -- # set +x 00:05:11.109 23:08:26 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:05:11.109 23:08:26 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:11.109 23:08:26 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:11.109 23:08:26 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:11.109 23:08:26 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:11.109 23:08:26 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:11.109 23:08:26 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:11.109 23:08:26 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:11.109 23:08:26 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:11.109 23:08:26 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:05:11.109 23:08:26 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.109 23:08:26 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:11.109 23:08:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:11.109 23:08:26 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:11.109 23:08:26 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:11.109 23:08:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.109 23:08:26 accel -- common/autotest_common.sh@10 -- # set +x 00:05:11.109 ************************************ 00:05:11.109 START TEST accel_missing_filename 00:05:11.109 ************************************ 00:05:11.109 23:08:26 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:05:11.109 23:08:26 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:11.109 23:08:26 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:11.109 23:08:26 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:11.109 23:08:26 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:11.109 23:08:26 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:11.109 23:08:26 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:11.109 23:08:26 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:11.109 23:08:26 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:11.109 23:08:26 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:11.109 23:08:26 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:11.109 23:08:26 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:11.109 23:08:26 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:11.109 23:08:26 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:11.109 23:08:26 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:11.109 23:08:26 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:11.109 23:08:26 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:11.109 [2024-07-15 23:08:26.357422] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:05:11.109 [2024-07-15 23:08:26.357491] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2223485 ] 00:05:11.109 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.109 [2024-07-15 23:08:26.420537] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.407 [2024-07-15 23:08:26.541547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.407 [2024-07-15 23:08:26.604713] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:11.407 [2024-07-15 23:08:26.686779] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:11.663 A filename is required. 
00:05:11.663 23:08:26 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:11.663 23:08:26 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:11.663 23:08:26 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:11.663 23:08:26 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:11.663 23:08:26 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:11.664 23:08:26 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:11.664 00:05:11.664 real 0m0.467s 00:05:11.664 user 0m0.359s 00:05:11.664 sys 0m0.142s 00:05:11.664 23:08:26 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.664 23:08:26 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:11.664 ************************************ 00:05:11.664 END TEST accel_missing_filename 00:05:11.664 ************************************ 00:05:11.664 23:08:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:11.664 23:08:26 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:11.664 23:08:26 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:11.664 23:08:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.664 23:08:26 accel -- common/autotest_common.sh@10 -- # set +x 00:05:11.664 ************************************ 00:05:11.664 START TEST accel_compress_verify 00:05:11.664 ************************************ 00:05:11.664 23:08:26 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:11.664 23:08:26 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:11.664 23:08:26 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:11.664 23:08:26 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:11.664 23:08:26 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:11.664 23:08:26 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:11.664 23:08:26 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:11.664 23:08:26 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:11.664 23:08:26 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:11.664 23:08:26 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:11.664 23:08:26 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:11.664 23:08:26 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:11.664 23:08:26 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:11.664 23:08:26 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:11.664 23:08:26 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:11.664 23:08:26 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:11.664 23:08:26 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:11.664 [2024-07-15 23:08:26.865370] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:05:11.664 [2024-07-15 23:08:26.865427] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2223521 ] 00:05:11.664 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.664 [2024-07-15 23:08:26.927374] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.921 [2024-07-15 23:08:27.048507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.921 [2024-07-15 23:08:27.111848] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:11.921 [2024-07-15 23:08:27.191809] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:12.179 00:05:12.179 Compression does not support the verify option, aborting. 00:05:12.179 23:08:27 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:12.179 23:08:27 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:12.179 23:08:27 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:12.179 23:08:27 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:12.179 23:08:27 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:12.179 23:08:27 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:12.179 00:05:12.179 real 0m0.467s 00:05:12.179 user 0m0.357s 00:05:12.179 sys 0m0.144s 00:05:12.179 23:08:27 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.179 23:08:27 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:12.179 ************************************ 00:05:12.179 END TEST accel_compress_verify 00:05:12.179 ************************************ 00:05:12.179 23:08:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:12.179 23:08:27 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:12.179 23:08:27 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:12.179 23:08:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.179 23:08:27 accel -- common/autotest_common.sh@10 -- # set +x 00:05:12.179 ************************************ 00:05:12.179 START TEST accel_wrong_workload 00:05:12.179 ************************************ 00:05:12.179 23:08:27 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:05:12.179 23:08:27 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:12.179 23:08:27 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:12.179 23:08:27 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:12.179 23:08:27 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:12.179 23:08:27 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:12.179 23:08:27 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:12.179 23:08:27 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:05:12.179 23:08:27 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:12.179 23:08:27 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:12.179 23:08:27 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:12.179 23:08:27 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:12.179 23:08:27 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:12.179 23:08:27 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:12.179 23:08:27 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:12.179 23:08:27 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:12.179 23:08:27 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:12.179 Unsupported workload type: foobar 00:05:12.179 [2024-07-15 23:08:27.379426] app.c:1460:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:12.179 accel_perf options: 00:05:12.179 [-h help message] 00:05:12.179 [-q queue depth per core] 00:05:12.179 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:12.179 [-T number of threads per core 00:05:12.179 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:12.179 [-t time in seconds] 00:05:12.179 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:12.179 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:12.179 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:12.179 [-l for compress/decompress workloads, name of uncompressed input file 00:05:12.179 [-S for crc32c workload, use this seed value (default 0) 00:05:12.179 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:12.179 [-f for fill workload, use this BYTE value (default 255) 00:05:12.179 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:12.179 [-y verify result if this switch is on] 00:05:12.179 [-a tasks to allocate per core (default: same value as -q)] 00:05:12.179 Can be used to spread operations across a wider range of memory. 
00:05:12.179 23:08:27 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:12.179 23:08:27 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:12.179 23:08:27 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:12.179 23:08:27 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:12.179 00:05:12.179 real 0m0.023s 00:05:12.179 user 0m0.012s 00:05:12.179 sys 0m0.010s 00:05:12.180 23:08:27 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.180 23:08:27 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:12.180 ************************************ 00:05:12.180 END TEST accel_wrong_workload 00:05:12.180 ************************************ 00:05:12.180 Error: writing output failed: Broken pipe 00:05:12.180 23:08:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:12.180 23:08:27 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:12.180 23:08:27 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:12.180 23:08:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.180 23:08:27 accel -- common/autotest_common.sh@10 -- # set +x 00:05:12.180 ************************************ 00:05:12.180 START TEST accel_negative_buffers 00:05:12.180 ************************************ 00:05:12.180 23:08:27 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:12.180 23:08:27 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:12.180 23:08:27 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:12.180 23:08:27 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:12.180 23:08:27 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:12.180 23:08:27 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:12.180 23:08:27 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:12.180 23:08:27 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:12.180 23:08:27 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:12.180 23:08:27 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:12.180 23:08:27 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:12.180 23:08:27 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:12.180 23:08:27 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:12.180 23:08:27 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:12.180 23:08:27 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:12.180 23:08:27 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:12.180 23:08:27 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:12.180 -x option must be non-negative. 
00:05:12.180 [2024-07-15 23:08:27.444837] app.c:1460:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:12.180 accel_perf options: 00:05:12.180 [-h help message] 00:05:12.180 [-q queue depth per core] 00:05:12.180 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:12.180 [-T number of threads per core 00:05:12.180 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:12.180 [-t time in seconds] 00:05:12.180 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:12.180 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:12.180 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:12.180 [-l for compress/decompress workloads, name of uncompressed input file 00:05:12.180 [-S for crc32c workload, use this seed value (default 0) 00:05:12.180 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:12.180 [-f for fill workload, use this BYTE value (default 255) 00:05:12.180 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:12.180 [-y verify result if this switch is on] 00:05:12.180 [-a tasks to allocate per core (default: same value as -q)] 00:05:12.180 Can be used to spread operations across a wider range of memory. 00:05:12.180 23:08:27 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:12.180 23:08:27 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:12.180 23:08:27 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:12.180 23:08:27 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:12.180 00:05:12.180 real 0m0.021s 00:05:12.180 user 0m0.013s 00:05:12.180 sys 0m0.009s 00:05:12.180 23:08:27 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.180 23:08:27 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:12.180 ************************************ 00:05:12.180 END TEST accel_negative_buffers 00:05:12.180 ************************************ 00:05:12.180 Error: writing output failed: Broken pipe 00:05:12.180 23:08:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:12.180 23:08:27 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:12.180 23:08:27 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:12.180 23:08:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.180 23:08:27 accel -- common/autotest_common.sh@10 -- # set +x 00:05:12.437 ************************************ 00:05:12.437 START TEST accel_crc32c 00:05:12.437 ************************************ 00:05:12.437 23:08:27 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:12.437 23:08:27 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:12.437 23:08:27 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:12.437 23:08:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:12.437 23:08:27 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:12.437 23:08:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:12.437 23:08:27 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:12.437 23:08:27 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:12.437 23:08:27 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:12.437 23:08:27 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:12.437 23:08:27 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:12.437 23:08:27 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:12.437 23:08:27 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:12.437 23:08:27 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:12.437 23:08:27 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:12.437 [2024-07-15 23:08:27.514338] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:05:12.437 [2024-07-15 23:08:27.514402] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2223702 ] 00:05:12.437 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.437 [2024-07-15 23:08:27.578972] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.437 [2024-07-15 23:08:27.699483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.694 23:08:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:12.694 23:08:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:12.694 23:08:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:12.694 23:08:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:12.694 23:08:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:12.694 23:08:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:12.694 23:08:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:12.694 23:08:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:12.694 23:08:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:12.694 23:08:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:12.694 23:08:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:12.694 23:08:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:12.694 23:08:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:12.694 23:08:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:12.694 23:08:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:12.694 23:08:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:12.694 23:08:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:12.694 23:08:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:12.694 23:08:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:12.694 23:08:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:12.694 23:08:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:12.694 23:08:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:12.694 23:08:27 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:12.694 23:08:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:12.694 23:08:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:12.694 23:08:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:12.694 23:08:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:05:12.694 23:08:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:12.694 23:08:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:12.694 23:08:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:12.694 23:08:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:12.694 23:08:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:12.694 23:08:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:12.694 23:08:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:12.694 23:08:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:12.694 23:08:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:12.694 23:08:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:12.694 23:08:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:12.694 23:08:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:12.694 23:08:27 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:12.694 23:08:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:12.694 23:08:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:12.694 23:08:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:12.694 23:08:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:12.694 23:08:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:12.695 23:08:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:12.695 23:08:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:12.695 23:08:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:12.695 23:08:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:12.695 23:08:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:12.695 23:08:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:12.695 23:08:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:12.695 23:08:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:12.695 23:08:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:12.695 23:08:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:12.695 23:08:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:12.695 23:08:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:12.695 23:08:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:12.695 23:08:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:12.695 23:08:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:12.695 23:08:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:12.695 23:08:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:12.695 23:08:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:12.695 23:08:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:12.695 23:08:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:12.695 23:08:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:12.695 23:08:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:12.695 23:08:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:12.695 23:08:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:12.695 23:08:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.065 23:08:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:14.065 23:08:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:05:14.065 23:08:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.065 23:08:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.065 23:08:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:14.065 23:08:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.065 23:08:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.065 23:08:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.065 23:08:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:14.065 23:08:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.065 23:08:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.065 23:08:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.065 23:08:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:14.065 23:08:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.065 23:08:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.065 23:08:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.065 23:08:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:14.065 23:08:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.065 23:08:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.065 23:08:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.065 23:08:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:14.065 23:08:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.065 23:08:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.065 23:08:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.065 23:08:28 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:14.065 23:08:28 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:14.065 23:08:28 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:14.065 00:05:14.065 real 0m1.489s 00:05:14.065 user 0m1.340s 00:05:14.065 sys 0m0.157s 00:05:14.065 23:08:28 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.066 23:08:28 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:14.066 ************************************ 00:05:14.066 END TEST accel_crc32c 00:05:14.066 ************************************ 00:05:14.066 23:08:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:14.066 23:08:29 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:14.066 23:08:29 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:14.066 23:08:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.066 23:08:29 accel -- common/autotest_common.sh@10 -- # set +x 00:05:14.066 ************************************ 00:05:14.066 START TEST accel_crc32c_C2 00:05:14.066 ************************************ 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:14.066 23:08:29 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:14.066 [2024-07-15 23:08:29.044590] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:05:14.066 [2024-07-15 23:08:29.044655] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2223858 ] 00:05:14.066 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.066 [2024-07-15 23:08:29.106312] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.066 [2024-07-15 23:08:29.223007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:14.066 23:08:29 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:05:14.066 23:08:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.438 23:08:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:15.438 23:08:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.438 23:08:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.438 23:08:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.438 23:08:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:15.438 23:08:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.438 23:08:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.438 23:08:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.438 23:08:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:15.438 23:08:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.438 23:08:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.438 23:08:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.438 23:08:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:15.438 23:08:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.438 23:08:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.438 23:08:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.438 23:08:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:15.438 23:08:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.438 23:08:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.438 23:08:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.438 23:08:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:15.438 23:08:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.438 23:08:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.438 23:08:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.438 23:08:30 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:15.438 23:08:30 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:15.438 23:08:30 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:15.438 00:05:15.438 real 0m1.474s 00:05:15.438 user 0m1.328s 00:05:15.438 sys 0m0.148s 00:05:15.438 23:08:30 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.438 23:08:30 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:15.438 ************************************ 00:05:15.438 END TEST accel_crc32c_C2 00:05:15.438 ************************************ 00:05:15.438 23:08:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:15.439 23:08:30 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:15.439 23:08:30 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:15.439 23:08:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.439 23:08:30 accel -- common/autotest_common.sh@10 -- # set +x 00:05:15.439 ************************************ 00:05:15.439 START TEST accel_copy 00:05:15.439 ************************************ 00:05:15.439 23:08:30 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:05:15.439 23:08:30 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:15.439 23:08:30 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
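Both crc32c cases above (the plain run with -S 32 and the -C 2 variant) were launched through build/examples/accel_perf with only the software module configured, so the JSON that build_accel_config feeds over /dev/fd/62 is effectively empty in this job. A minimal sketch of repeating those two workloads by hand, assuming the same build tree as this run and simply omitting the -c option since no module configuration is generated; flags are copied from the harness invocations, not re-documented here:
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path as it appears in this log
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w crc32c -S 32 -y   # plain crc32c case, 1-second run
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w crc32c -y -C 2    # the -C 2 variant, same flags as the harness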
00:05:15.439 23:08:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.439 23:08:30 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:15.439 23:08:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.439 23:08:30 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:15.439 23:08:30 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:15.439 23:08:30 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:15.439 23:08:30 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:15.439 23:08:30 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:15.439 23:08:30 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:15.439 23:08:30 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:15.439 23:08:30 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:15.439 23:08:30 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:15.439 [2024-07-15 23:08:30.563800] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:05:15.439 [2024-07-15 23:08:30.563859] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2224022 ] 00:05:15.439 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.439 [2024-07-15 23:08:30.627363] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.439 [2024-07-15 23:08:30.746024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.697 23:08:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:17.069 23:08:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:17.069 23:08:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:17.069 23:08:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:17.069 23:08:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:17.069 
23:08:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:17.069 23:08:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:17.069 23:08:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:17.069 23:08:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:17.069 23:08:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:17.069 23:08:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:17.069 23:08:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:17.069 23:08:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:17.069 23:08:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:17.069 23:08:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:17.069 23:08:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:17.069 23:08:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:17.069 23:08:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:17.069 23:08:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:17.069 23:08:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:17.069 23:08:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:17.069 23:08:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:17.069 23:08:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:17.069 23:08:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:17.069 23:08:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:17.069 23:08:32 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:17.069 23:08:32 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:17.069 23:08:32 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:17.069 00:05:17.069 real 0m1.474s 00:05:17.069 user 0m1.330s 00:05:17.069 sys 0m0.146s 00:05:17.069 23:08:32 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.069 23:08:32 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:17.069 ************************************ 00:05:17.069 END TEST accel_copy 00:05:17.069 ************************************ 00:05:17.069 23:08:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:17.069 23:08:32 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:17.069 23:08:32 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:17.069 23:08:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.069 23:08:32 accel -- common/autotest_common.sh@10 -- # set +x 00:05:17.069 ************************************ 00:05:17.069 START TEST accel_fill 00:05:17.069 ************************************ 00:05:17.069 23:08:32 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:17.069 23:08:32 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:17.069 23:08:32 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:17.069 23:08:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:17.069 23:08:32 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:17.069 23:08:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:17.069 23:08:32 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:17.069 23:08:32 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:17.070 [2024-07-15 23:08:32.078956] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:05:17.070 [2024-07-15 23:08:32.079015] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2224290 ] 00:05:17.070 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.070 [2024-07-15 23:08:32.142159] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.070 [2024-07-15 23:08:32.262020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
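The fill case being configured here is driven as accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y; the -f 128 pattern shows up as val=0x80 in the readout, on 4096-byte buffers. A small sketch of sweeping the queue depth for the same workload, reusing only flags that already appear in the harness invocation and assuming they keep the same meaning when run standalone (the sweep itself is illustrative, not part of this job):
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # same build tree as this log
for q in 16 32 64 128; do
    echo "== fill, -q $q =="
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w fill -f 128 -q "$q" -a 64 -y
done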
00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:17.070 23:08:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:18.440 23:08:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:18.440 23:08:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:18.440 23:08:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:18.440 23:08:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:18.440 23:08:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:18.440 23:08:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:18.440 23:08:33 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:05:18.440 23:08:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:18.440 23:08:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:18.440 23:08:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:18.440 23:08:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:18.440 23:08:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:18.440 23:08:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:18.440 23:08:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:18.440 23:08:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:18.440 23:08:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:18.440 23:08:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:18.440 23:08:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:18.440 23:08:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:18.440 23:08:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:18.440 23:08:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:18.440 23:08:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:18.440 23:08:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:18.440 23:08:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:18.440 23:08:33 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:18.440 23:08:33 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:18.440 23:08:33 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:18.440 00:05:18.440 real 0m1.481s 00:05:18.440 user 0m1.331s 00:05:18.440 sys 0m0.152s 00:05:18.440 23:08:33 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.440 23:08:33 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:18.440 ************************************ 00:05:18.440 END TEST accel_fill 00:05:18.440 ************************************ 00:05:18.440 23:08:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:18.440 23:08:33 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:18.440 23:08:33 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:18.440 23:08:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.440 23:08:33 accel -- common/autotest_common.sh@10 -- # set +x 00:05:18.440 ************************************ 00:05:18.440 START TEST accel_copy_crc32c 00:05:18.440 ************************************ 00:05:18.440 23:08:33 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:05:18.440 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:18.440 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:18.440 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:18.440 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:18.440 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:18.440 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:18.440 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:18.440 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:18.440 23:08:33 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:18.440 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:18.440 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:18.440 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:18.440 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:18.440 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:18.440 [2024-07-15 23:08:33.606488] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:05:18.440 [2024-07-15 23:08:33.606551] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2224449 ] 00:05:18.440 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.440 [2024-07-15 23:08:33.669003] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.698 [2024-07-15 23:08:33.786776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:18.698 
23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:18.698 23:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.067 23:08:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:20.067 23:08:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.067 23:08:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.067 23:08:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.067 23:08:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:20.067 23:08:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.067 23:08:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.067 23:08:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.067 23:08:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:20.067 23:08:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.067 23:08:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.067 23:08:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.067 23:08:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:20.067 23:08:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.067 23:08:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.067 23:08:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.067 23:08:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:20.067 23:08:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.067 23:08:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.067 23:08:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.067 23:08:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:20.067 23:08:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.067 23:08:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.067 23:08:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.067 23:08:35 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:20.067 23:08:35 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:20.067 23:08:35 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:20.068 00:05:20.068 real 0m1.474s 00:05:20.068 user 0m1.323s 00:05:20.068 sys 0m0.153s 00:05:20.068 23:08:35 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.068 23:08:35 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:20.068 ************************************ 00:05:20.068 END TEST accel_copy_crc32c 00:05:20.068 ************************************ 00:05:20.068 23:08:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:20.068 23:08:35 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:20.068 23:08:35 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:20.068 23:08:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.068 23:08:35 accel -- common/autotest_common.sh@10 -- # set +x 00:05:20.068 ************************************ 00:05:20.068 START TEST accel_copy_crc32c_C2 00:05:20.068 ************************************ 00:05:20.068 23:08:35 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:20.068 [2024-07-15 23:08:35.124556] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:05:20.068 [2024-07-15 23:08:35.124621] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2224610 ] 00:05:20.068 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.068 [2024-07-15 23:08:35.188642] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.068 [2024-07-15 23:08:35.306909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
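Every accel_perf start in this stretch logs "EAL: No free 2048 kB hugepages reported on node 1", and the runs continue regardless, so the notice is informational here. A minimal sketch for checking 2 MB hugepage availability per NUMA node before launching, assuming a standard Linux sysfs layout (the paths below are not taken from this log):
grep -i hugepages /proc/meminfo   # overall hugepage pool counters
for n in /sys/devices/system/node/node*/hugepages/hugepages-2048kB; do
    node=$(basename "$(dirname "$(dirname "$n")")")
    echo "$node: total=$(cat "$n/nr_hugepages") free=$(cat "$n/free_hugepages")"
done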
00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.068 23:08:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:21.439 23:08:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:21.439 23:08:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.439 23:08:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:21.439 23:08:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:21.439 23:08:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:21.439 23:08:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.439 23:08:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:21.439 23:08:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:21.439 23:08:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:21.439 23:08:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.439 23:08:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:21.439 23:08:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:21.439 23:08:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:21.439 23:08:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.439 23:08:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:21.439 23:08:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:21.439 23:08:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:21.439 23:08:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.439 23:08:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:21.439 23:08:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:21.439 23:08:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:21.439 23:08:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.439 23:08:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:21.439 23:08:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
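Each accel_* case in this output ends with bash's time summary (real/user/sys) next to an END TEST banner. A short sketch for pulling those figures out of a saved copy of this console output; autotest.log is a hypothetical filename used only for illustration:
grep -oE 'END TEST [a-zA-Z0-9_]+' autotest.log | sort | uniq -c    # which cases ran, and how often
grep -oE '(real|user|sys)[[:space:]]+[0-9]+m[0-9.]+s' autotest.log  # the per-case timing lines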
00:05:21.439 23:08:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:21.439 23:08:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:21.439 23:08:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:21.439 00:05:21.439 real 0m1.480s 00:05:21.439 user 0m1.335s 00:05:21.439 sys 0m0.147s 00:05:21.439 23:08:36 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.439 23:08:36 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:21.439 ************************************ 00:05:21.439 END TEST accel_copy_crc32c_C2 00:05:21.439 ************************************ 00:05:21.439 23:08:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:21.439 23:08:36 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:21.439 23:08:36 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:21.439 23:08:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.439 23:08:36 accel -- common/autotest_common.sh@10 -- # set +x 00:05:21.439 ************************************ 00:05:21.439 START TEST accel_dualcast 00:05:21.439 ************************************ 00:05:21.439 23:08:36 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:05:21.439 23:08:36 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:21.439 23:08:36 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:21.440 23:08:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:21.440 23:08:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:21.440 23:08:36 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:21.440 23:08:36 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:21.440 23:08:36 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:21.440 23:08:36 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:21.440 23:08:36 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:21.440 23:08:36 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:21.440 23:08:36 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:21.440 23:08:36 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:21.440 23:08:36 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:21.440 23:08:36 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:21.440 [2024-07-15 23:08:36.649817] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
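The dualcast case starting here is the fifth software-module workload exercised in this stretch, after crc32c, copy, fill and copy_crc32c, each driven for one second (-t 1). A hedged sketch of running the same set back to back outside the harness, again assuming the build tree used by this job and omitting the per-run JSON config:
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
for w in crc32c copy fill copy_crc32c dualcast; do   # workload names as they appear in this log
    echo "== $w =="
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w "$w" -y
done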
00:05:21.440 [2024-07-15 23:08:36.649878] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2224882 ] 00:05:21.440 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.440 [2024-07-15 23:08:36.713876] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.698 [2024-07-15 23:08:36.832753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:21.698 23:08:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:23.071 23:08:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:23.071 23:08:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:23.071 23:08:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:23.071 23:08:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:23.071 23:08:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:23.071 23:08:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:23.071 23:08:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:23.071 23:08:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:23.071 23:08:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:23.071 23:08:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:23.071 23:08:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:23.071 23:08:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:23.071 23:08:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:23.071 23:08:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:23.071 23:08:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:23.071 23:08:38 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:23.071 23:08:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:23.071 23:08:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:23.071 23:08:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:23.071 23:08:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:23.071 23:08:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:23.071 23:08:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:23.071 23:08:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:23.071 23:08:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:23.071 23:08:38 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:23.071 23:08:38 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:23.071 23:08:38 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:23.071 00:05:23.071 real 0m1.483s 00:05:23.071 user 0m1.336s 00:05:23.071 sys 0m0.148s 00:05:23.071 23:08:38 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.071 23:08:38 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:23.071 ************************************ 00:05:23.071 END TEST accel_dualcast 00:05:23.071 ************************************ 00:05:23.071 23:08:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:23.071 23:08:38 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:23.071 23:08:38 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:23.071 23:08:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.071 23:08:38 accel -- common/autotest_common.sh@10 -- # set +x 00:05:23.071 ************************************ 00:05:23.071 START TEST accel_compare 00:05:23.071 ************************************ 00:05:23.071 23:08:38 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:05:23.071 23:08:38 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:23.071 23:08:38 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:23.071 23:08:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:23.071 23:08:38 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:23.071 23:08:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:23.071 23:08:38 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:23.071 23:08:38 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:23.071 23:08:38 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:23.071 23:08:38 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:23.071 23:08:38 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:23.071 23:08:38 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:23.071 23:08:38 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:23.071 23:08:38 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:23.071 23:08:38 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:23.071 [2024-07-15 23:08:38.177431] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
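
Every case in this sweep is wrapped in run_test, which prints the START TEST / END TEST banners and the real/user/sys figures that follow each pass. The real wrapper lives in common/autotest_common.sh and is not reproduced in this log; a minimal stand-in that yields the same shape of output:

#!/usr/bin/env bash
# Minimal stand-in for the run_test wrapper: banner, timed command, banner.
run_test() {
  local name=$1; shift
  echo "START TEST $name"
  time "$@"               # bash prints real/user/sys, as seen in the log
  echo "END TEST $name"
}
run_test demo_sleep sleep 1
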
00:05:23.071 [2024-07-15 23:08:38.177494] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2225035 ] 00:05:23.071 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.071 [2024-07-15 23:08:38.242236] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.071 [2024-07-15 23:08:38.359540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.329 23:08:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:23.329 23:08:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:23.329 23:08:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:23.329 23:08:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:23.329 23:08:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:23.329 23:08:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:23.329 23:08:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:23.329 23:08:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:23.329 23:08:38 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:23.329 23:08:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:23.329 23:08:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:23.329 23:08:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:23.329 23:08:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:23.329 23:08:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:23.329 23:08:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:23.329 23:08:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:23.329 23:08:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:23.329 23:08:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:23.329 23:08:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:23.329 23:08:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:23.329 23:08:38 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:23.329 23:08:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:23.329 23:08:38 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:23.329 23:08:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:23.329 23:08:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:23.329 23:08:38 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:23.329 23:08:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:23.329 23:08:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:23.329 23:08:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:23.329 23:08:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:23.329 23:08:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:23.329 23:08:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:23.329 23:08:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:23.329 23:08:38 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:23.329 23:08:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:23.329 23:08:38 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:23.329 23:08:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:23.329 23:08:38 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:23.329 23:08:38 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:23.329 23:08:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:23.329 23:08:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:23.329 23:08:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:23.329 23:08:38 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:23.330 23:08:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:23.330 23:08:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:23.330 23:08:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:23.330 23:08:38 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:23.330 23:08:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:23.330 23:08:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:23.330 23:08:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:23.330 23:08:38 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:23.330 23:08:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:23.330 23:08:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:23.330 23:08:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:23.330 23:08:38 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:23.330 23:08:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:23.330 23:08:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:23.330 23:08:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:23.330 23:08:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:23.330 23:08:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:23.330 23:08:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:23.330 23:08:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:23.330 23:08:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:23.330 23:08:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:23.330 23:08:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:23.330 23:08:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:24.702 23:08:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:24.702 23:08:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:24.702 23:08:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:24.702 23:08:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:24.702 23:08:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:24.702 23:08:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:24.702 23:08:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:24.702 23:08:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:24.702 23:08:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:24.702 23:08:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:24.702 23:08:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:24.702 23:08:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:24.702 23:08:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:24.702 23:08:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:24.702 23:08:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:24.702 23:08:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:24.702 
23:08:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:24.702 23:08:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:24.702 23:08:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:24.702 23:08:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:24.702 23:08:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:24.702 23:08:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:24.702 23:08:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:24.702 23:08:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:24.702 23:08:39 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:24.702 23:08:39 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:24.702 23:08:39 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:24.702 00:05:24.702 real 0m1.467s 00:05:24.702 user 0m1.327s 00:05:24.702 sys 0m0.142s 00:05:24.702 23:08:39 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.702 23:08:39 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:24.702 ************************************ 00:05:24.702 END TEST accel_compare 00:05:24.702 ************************************ 00:05:24.702 23:08:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:24.702 23:08:39 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:24.702 23:08:39 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:24.702 23:08:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.702 23:08:39 accel -- common/autotest_common.sh@10 -- # set +x 00:05:24.702 ************************************ 00:05:24.702 START TEST accel_xor 00:05:24.702 ************************************ 00:05:24.702 23:08:39 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:24.702 [2024-07-15 23:08:39.693089] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
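
Two xor passes appear back to back: the one starting above runs with the default of two source buffers (val=2 in its configuration trace), and the next pass adds -x 3 and shows val=3. Reading -x as the xor source-buffer count is inferred from those values rather than stated anywhere in the log; under that assumption the two invocations reduce to:

#!/usr/bin/env bash
# The two xor passes, differing only in the source-buffer count.
ACCEL_PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
sudo "$ACCEL_PERF" -t 1 -w xor -y        # default: two source buffers
sudo "$ACCEL_PERF" -t 1 -w xor -y -x 3   # three source buffers
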
00:05:24.702 [2024-07-15 23:08:39.693154] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2225203 ] 00:05:24.702 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.702 [2024-07-15 23:08:39.759473] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.702 [2024-07-15 23:08:39.879946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:24.702 23:08:39 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:24.702 23:08:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:24.703 23:08:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:24.703 23:08:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:24.703 23:08:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:24.703 23:08:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:24.703 23:08:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:24.703 23:08:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:26.075 23:08:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:26.075 23:08:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:26.075 23:08:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:26.075 23:08:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:26.075 23:08:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:26.075 23:08:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:26.075 23:08:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:26.075 23:08:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:26.075 23:08:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:26.075 23:08:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:26.075 23:08:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:26.075 23:08:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:26.075 23:08:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:26.075 23:08:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:26.075 23:08:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:26.075 23:08:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:26.075 23:08:41 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:05:26.075 23:08:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:26.075 23:08:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:26.075 23:08:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:26.075 23:08:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:26.075 23:08:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:26.075 23:08:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:26.075 23:08:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:26.075 23:08:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:26.075 23:08:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:26.075 23:08:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:26.075 00:05:26.075 real 0m1.492s 00:05:26.075 user 0m1.340s 00:05:26.075 sys 0m0.154s 00:05:26.075 23:08:41 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.075 23:08:41 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:26.075 ************************************ 00:05:26.075 END TEST accel_xor 00:05:26.075 ************************************ 00:05:26.075 23:08:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:26.075 23:08:41 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:26.075 23:08:41 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:26.075 23:08:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.075 23:08:41 accel -- common/autotest_common.sh@10 -- # set +x 00:05:26.075 ************************************ 00:05:26.075 START TEST accel_xor 00:05:26.075 ************************************ 00:05:26.075 23:08:41 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:05:26.075 23:08:41 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:26.075 23:08:41 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:26.075 23:08:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:26.075 23:08:41 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:26.075 23:08:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:26.075 23:08:41 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:26.075 23:08:41 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:26.075 23:08:41 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:26.075 23:08:41 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:26.075 23:08:41 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:26.075 23:08:41 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:26.075 23:08:41 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:26.075 23:08:41 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:26.075 23:08:41 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:26.075 [2024-07-15 23:08:41.231858] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
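
The "EAL: No free 2048 kB hugepages reported on node 1" notice recurs in every pass of this run; it only reports that NUMA node 1 has no 2 MB hugepages configured, and each test still finishes with its END TEST banner. A small sketch for checking the per-node hugepage pools on a Linux host, using the standard sysfs layout:

#!/usr/bin/env bash
# Show how many 2 MB hugepages each NUMA node currently has configured.
for node in /sys/devices/system/node/node*; do
  count=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
  echo "${node##*/}: $count x 2048 kB hugepages"
done
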
00:05:26.075 [2024-07-15 23:08:41.231924] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2225471 ] 00:05:26.075 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.075 [2024-07-15 23:08:41.297469] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.335 [2024-07-15 23:08:41.422235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:26.335 23:08:41 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:26.335 23:08:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:27.705 23:08:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:27.705 23:08:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:27.705 23:08:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:27.705 23:08:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:27.705 23:08:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:27.705 23:08:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:27.705 23:08:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:27.705 23:08:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:27.705 23:08:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:27.705 23:08:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:27.706 23:08:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:27.706 23:08:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:27.706 23:08:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:27.706 23:08:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:27.706 23:08:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:27.706 23:08:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:27.706 23:08:42 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:05:27.706 23:08:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:27.706 23:08:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:27.706 23:08:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:27.706 23:08:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:27.706 23:08:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:27.706 23:08:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:27.706 23:08:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:27.706 23:08:42 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:27.706 23:08:42 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:27.706 23:08:42 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:27.706 00:05:27.706 real 0m1.495s 00:05:27.706 user 0m1.350s 00:05:27.706 sys 0m0.147s 00:05:27.706 23:08:42 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.706 23:08:42 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:27.706 ************************************ 00:05:27.706 END TEST accel_xor 00:05:27.706 ************************************ 00:05:27.706 23:08:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:27.706 23:08:42 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:27.706 23:08:42 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:27.706 23:08:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.706 23:08:42 accel -- common/autotest_common.sh@10 -- # set +x 00:05:27.706 ************************************ 00:05:27.706 START TEST accel_dif_verify 00:05:27.706 ************************************ 00:05:27.706 23:08:42 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:05:27.706 23:08:42 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:27.706 23:08:42 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:27.706 23:08:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:27.706 23:08:42 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:27.706 23:08:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:27.706 23:08:42 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:27.706 23:08:42 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:27.706 23:08:42 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:27.706 23:08:42 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:27.706 23:08:42 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:27.706 23:08:42 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:27.706 23:08:42 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:27.706 23:08:42 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:27.706 23:08:42 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:27.706 [2024-07-15 23:08:42.771653] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
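
Unlike the earlier workloads, the dif_verify configuration below carries two extra sizes ('512 bytes' and '8 bytes') alongside the 4096-byte buffers. Read as DIF geometry, that would be a 4096-byte buffer split into 512-byte blocks with 8 bytes of protection information per block; this reading is an inference from the traced values, not something the log states. The implied arithmetic:

#!/usr/bin/env bash
# Hypothetical DIF geometry implied by the sizes in the dif_verify trace below.
buf=4096 blk=512 pi=8
blocks=$(( buf / blk ))
echo "$blocks blocks of $blk bytes, $(( blocks * pi )) bytes of protection info per $buf-byte buffer"
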
00:05:27.706 [2024-07-15 23:08:42.771718] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2225630 ] 00:05:27.706 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.706 [2024-07-15 23:08:42.833389] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.706 [2024-07-15 23:08:42.944296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:27.706 23:08:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:29.129 23:08:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:05:29.129 23:08:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:29.129 23:08:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:29.129 23:08:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:29.129 23:08:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:29.129 23:08:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:29.129 23:08:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:29.129 23:08:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:29.129 23:08:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:29.129 23:08:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:29.129 23:08:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:29.129 23:08:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:29.129 23:08:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:29.129 23:08:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:29.129 23:08:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:29.129 23:08:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:29.129 23:08:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:29.129 23:08:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:29.129 23:08:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:29.129 23:08:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:29.129 23:08:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:29.129 23:08:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:29.129 23:08:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:29.129 23:08:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:29.129 23:08:44 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:29.129 23:08:44 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:29.129 23:08:44 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:29.129 00:05:29.129 real 0m1.473s 00:05:29.129 user 0m1.336s 00:05:29.129 sys 0m0.141s 00:05:29.129 23:08:44 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.129 23:08:44 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:29.129 ************************************ 00:05:29.129 END TEST accel_dif_verify 00:05:29.129 ************************************ 00:05:29.129 23:08:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:29.129 23:08:44 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:29.129 23:08:44 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:29.129 23:08:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.129 23:08:44 accel -- common/autotest_common.sh@10 -- # set +x 00:05:29.129 ************************************ 00:05:29.129 START TEST accel_dif_generate 00:05:29.129 ************************************ 00:05:29.129 23:08:44 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:05:29.129 23:08:44 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:29.129 23:08:44 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:29.129 23:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:29.129 
23:08:44 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:29.129 23:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:29.129 23:08:44 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:29.129 23:08:44 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:29.129 23:08:44 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:29.129 23:08:44 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:29.129 23:08:44 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:29.129 23:08:44 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:29.129 23:08:44 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:29.129 23:08:44 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:29.129 23:08:44 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:29.129 [2024-07-15 23:08:44.287961] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:05:29.129 [2024-07-15 23:08:44.288033] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2225818 ] 00:05:29.129 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.129 [2024-07-15 23:08:44.350967] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.387 [2024-07-15 23:08:44.474588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:29.387 23:08:44 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:29.387 23:08:44 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:29.387 23:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:30.759 23:08:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:30.759 23:08:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:30.759 23:08:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:30.759 23:08:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:30.759 23:08:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:30.759 23:08:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:30.759 23:08:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:30.759 23:08:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:30.759 23:08:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:30.759 23:08:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:30.759 23:08:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:30.759 23:08:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:30.759 23:08:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:30.759 23:08:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:30.759 23:08:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:30.759 23:08:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:30.759 23:08:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:30.759 23:08:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:30.759 23:08:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:30.759 23:08:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:30.759 23:08:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:30.759 23:08:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:30.759 23:08:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:30.759 23:08:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:30.759 23:08:45 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:30.759 23:08:45 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:30.759 23:08:45 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:30.759 00:05:30.759 real 0m1.491s 00:05:30.759 user 0m1.347s 00:05:30.759 sys 0m0.148s 00:05:30.759 23:08:45 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.759 23:08:45 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:30.759 ************************************ 00:05:30.759 END TEST accel_dif_generate 00:05:30.759 ************************************ 00:05:30.759 23:08:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:30.759 23:08:45 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:30.759 23:08:45 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:30.759 23:08:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.759 23:08:45 accel -- common/autotest_common.sh@10 -- # set +x 00:05:30.759 ************************************ 00:05:30.759 START TEST accel_dif_generate_copy 00:05:30.759 ************************************ 00:05:30.759 23:08:45 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:05:30.759 23:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:30.759 23:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:30.759 23:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:30.759 23:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:30.759 23:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:30.759 23:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:30.759 23:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:30.759 23:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:30.759 23:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:30.759 23:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:30.759 23:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:30.759 23:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:30.759 23:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:30.759 23:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:30.759 [2024-07-15 23:08:45.824030] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
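For reference, the dif_generate pass that just completed drives the software accel module through the accel_perf example app, with the accel configuration fed as JSON on /dev/fd/62 by accel.sh. A minimal standalone sketch of the same workload (a sketch only, assuming a built SPDK tree at the workspace path used in this run and that omitting -c leaves the default software module in place):

  # run the DIF-generate workload for 1 second on the default (software) module
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_generate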
00:05:30.759 [2024-07-15 23:08:45.824105] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2226067 ] 00:05:30.759 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.759 [2024-07-15 23:08:45.886919] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.759 [2024-07-15 23:08:46.010228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:31.017 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:31.018 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:31.018 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:31.018 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:31.018 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:31.018 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:31.018 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:31.018 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:31.018 23:08:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:32.392 23:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:32.392 23:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:32.392 23:08:47 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:05:32.392 23:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:32.392 23:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:32.392 23:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:32.392 23:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:32.392 23:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:32.392 23:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:32.392 23:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:32.392 23:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:32.392 23:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:32.392 23:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:32.392 23:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:32.392 23:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:32.392 23:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:32.392 23:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:32.392 23:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:32.392 23:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:32.392 23:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:32.392 23:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:32.392 23:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:32.392 23:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:32.392 23:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:32.392 23:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:32.392 23:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:32.392 23:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:32.392 00:05:32.392 real 0m1.491s 00:05:32.392 user 0m1.339s 00:05:32.392 sys 0m0.155s 00:05:32.392 23:08:47 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.392 23:08:47 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:32.392 ************************************ 00:05:32.392 END TEST accel_dif_generate_copy 00:05:32.392 ************************************ 00:05:32.392 23:08:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:32.392 23:08:47 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:32.392 23:08:47 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:32.392 23:08:47 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:32.392 23:08:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.392 23:08:47 accel -- common/autotest_common.sh@10 -- # set +x 00:05:32.392 ************************************ 00:05:32.392 START TEST accel_comp 00:05:32.392 ************************************ 00:05:32.392 23:08:47 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:32.392 23:08:47 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:05:32.392 [2024-07-15 23:08:47.364492] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:05:32.392 [2024-07-15 23:08:47.364555] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2226222 ] 00:05:32.392 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.392 [2024-07-15 23:08:47.429178] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.392 [2024-07-15 23:08:47.551605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:32.392 23:08:47 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:32.392 23:08:47 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:32.393 23:08:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:32.393 23:08:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:32.393 23:08:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:32.393 23:08:47 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:32.393 23:08:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:32.393 23:08:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:32.393 23:08:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:32.393 23:08:47 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:32.393 23:08:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:32.393 23:08:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:32.393 23:08:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:32.393 23:08:47 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:32.393 23:08:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:32.393 23:08:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:32.393 23:08:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:05:32.393 23:08:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:32.393 23:08:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:32.393 23:08:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:32.393 23:08:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:32.393 23:08:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:32.393 23:08:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:32.393 23:08:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:32.393 23:08:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:33.767 23:08:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:33.767 23:08:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:33.767 23:08:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:33.767 23:08:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:33.767 23:08:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:33.767 23:08:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:33.767 23:08:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:33.767 23:08:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:33.767 23:08:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:33.767 23:08:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:33.767 23:08:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:33.767 23:08:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:33.767 23:08:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:33.767 23:08:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:33.767 23:08:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:33.767 23:08:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:33.767 23:08:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:33.767 23:08:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:33.767 23:08:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:33.767 23:08:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:33.767 23:08:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:33.767 23:08:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:33.767 23:08:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:33.767 23:08:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:33.767 23:08:48 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:33.767 23:08:48 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:33.767 23:08:48 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:33.767 00:05:33.767 real 0m1.474s 00:05:33.767 user 0m1.336s 00:05:33.767 sys 0m0.142s 00:05:33.767 23:08:48 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.767 23:08:48 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:33.767 ************************************ 00:05:33.767 END TEST accel_comp 00:05:33.767 ************************************ 00:05:33.767 23:08:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:33.767 23:08:48 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:33.767 23:08:48 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:33.767 23:08:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.767 23:08:48 accel -- 
common/autotest_common.sh@10 -- # set +x 00:05:33.767 ************************************ 00:05:33.767 START TEST accel_decomp 00:05:33.767 ************************************ 00:05:33.767 23:08:48 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:33.767 23:08:48 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:33.767 23:08:48 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:33.767 23:08:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:33.767 23:08:48 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:33.767 23:08:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:33.767 23:08:48 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:33.767 23:08:48 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:33.767 23:08:48 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:33.767 23:08:48 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:33.767 23:08:48 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.767 23:08:48 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.767 23:08:48 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:33.767 23:08:48 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:05:33.767 23:08:48 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:05:33.767 [2024-07-15 23:08:48.886605] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
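The compress pass above and the decompress pass starting here both take the checked-in corpus test/accel/bib via -l; the decompress invocations additionally pass -y, which the command line above shows being requested alongside the same input file. A minimal sketch of that pair, under the same assumptions as the dif_generate sketch above:

  # compress the test corpus for 1 second, then run the decompress workload with -y on the same file
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y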
00:05:33.767 [2024-07-15 23:08:48.886672] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2226474 ] 00:05:33.767 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.767 [2024-07-15 23:08:48.951719] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.767 [2024-07-15 23:08:49.074153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:34.026 23:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:35.400 23:08:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:35.400 23:08:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:35.400 23:08:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:35.400 23:08:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:35.400 23:08:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:35.400 23:08:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:35.400 23:08:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:35.400 23:08:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:35.400 23:08:50 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:35.400 23:08:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:35.400 23:08:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:35.400 23:08:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:35.400 23:08:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:35.400 23:08:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:35.400 23:08:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:35.400 23:08:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:35.400 23:08:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:35.400 23:08:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:35.400 23:08:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:35.400 23:08:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:35.400 23:08:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:35.400 23:08:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:35.400 23:08:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:35.400 23:08:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:35.400 23:08:50 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:35.400 23:08:50 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:35.400 23:08:50 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:35.400 00:05:35.400 real 0m1.497s 00:05:35.400 user 0m1.350s 00:05:35.400 sys 0m0.150s 00:05:35.400 23:08:50 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.400 23:08:50 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:05:35.400 ************************************ 00:05:35.400 END TEST accel_decomp 00:05:35.400 ************************************ 00:05:35.400 23:08:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:35.400 23:08:50 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:35.400 23:08:50 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:35.400 23:08:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.400 23:08:50 accel -- common/autotest_common.sh@10 -- # set +x 00:05:35.400 ************************************ 00:05:35.400 START TEST accel_decomp_full 00:05:35.400 ************************************ 00:05:35.400 23:08:50 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:35.400 23:08:50 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:05:35.400 23:08:50 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:05:35.400 23:08:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:35.400 23:08:50 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:35.400 23:08:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:35.400 23:08:50 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:35.400 23:08:50 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:05:35.400 23:08:50 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:35.400 23:08:50 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:35.400 23:08:50 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:35.400 23:08:50 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:35.400 23:08:50 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:35.400 23:08:50 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:05:35.400 23:08:50 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:05:35.400 [2024-07-15 23:08:50.432332] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:05:35.400 [2024-07-15 23:08:50.432398] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2226655 ] 00:05:35.400 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.400 [2024-07-15 23:08:50.495802] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.400 [2024-07-15 23:08:50.618892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.400 23:08:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:35.400 23:08:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:35.400 23:08:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:35.400 23:08:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:35.400 23:08:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:35.400 23:08:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:35.400 23:08:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:35.400 23:08:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:35.400 23:08:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:35.400 23:08:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:35.400 23:08:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:35.400 23:08:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:35.400 23:08:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:05:35.400 23:08:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:35.400 23:08:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:35.400 23:08:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:35.400 23:08:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:35.400 23:08:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:35.400 23:08:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:35.400 23:08:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:35.400 23:08:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:35.400 23:08:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:35.400 23:08:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:35.401 23:08:50 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:35.401 23:08:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:36.771 23:08:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:36.771 23:08:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:36.772 23:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:36.772 23:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:36.772 23:08:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:36.772 23:08:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:36.772 23:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:36.772 23:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:36.772 23:08:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:36.772 23:08:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:36.772 23:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:36.772 23:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:36.772 23:08:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:36.772 23:08:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:36.772 23:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:36.772 23:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:36.772 23:08:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:36.772 23:08:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:36.772 23:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:36.772 23:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:36.772 23:08:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:36.772 23:08:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:36.772 23:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:36.772 23:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:36.772 23:08:51 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:36.772 23:08:51 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:36.772 23:08:51 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:36.772 00:05:36.772 real 0m1.502s 00:05:36.772 user 0m1.352s 00:05:36.772 sys 0m0.153s 00:05:36.772 23:08:51 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.772 23:08:51 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:05:36.772 ************************************ 00:05:36.772 END TEST accel_decomp_full 00:05:36.772 ************************************ 00:05:36.772 23:08:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:36.772 23:08:51 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:36.772 23:08:51 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:05:36.772 23:08:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.772 23:08:51 accel -- common/autotest_common.sh@10 -- # set +x 00:05:36.772 ************************************ 00:05:36.772 START TEST accel_decomp_mcore 00:05:36.772 ************************************ 00:05:36.772 23:08:51 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:36.772 23:08:51 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:36.772 23:08:51 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:36.772 23:08:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.772 23:08:51 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:36.772 23:08:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.772 23:08:51 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:36.772 23:08:51 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:36.772 23:08:51 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:36.772 23:08:51 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:36.772 23:08:51 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.772 23:08:51 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.772 23:08:51 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:36.772 23:08:51 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:36.772 23:08:51 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:36.772 [2024-07-15 23:08:51.979641] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
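The mcore variant launched here differs from the preceding decompress runs only in the -m 0xf core mask, which is why the notices below report "Total cores available: 4" and reactors starting on cores 0-3. A minimal sketch of the multicore invocation, under the same assumptions as above:

  # decompress the same input across 4 cores (mask 0xf)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf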
00:05:36.772 [2024-07-15 23:08:51.979707] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2226814 ] 00:05:36.772 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.772 [2024-07-15 23:08:52.044657] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:37.030 [2024-07-15 23:08:52.173878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.030 [2024-07-15 23:08:52.173933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:37.030 [2024-07-15 23:08:52.173985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:37.030 [2024-07-15 23:08:52.173989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.030 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:37.030 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:37.030 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:37.030 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:37.030 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:37.030 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:37.030 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:37.030 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:37.030 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:37.030 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:37.030 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:37.030 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:37.030 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:37.030 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:37.030 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:37.030 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:37.030 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:37.031 23:08:52 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:37.031 23:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.403 23:08:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:38.403 23:08:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.403 23:08:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.403 23:08:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.403 23:08:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:38.403 23:08:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.403 23:08:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.403 23:08:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.403 23:08:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:38.403 23:08:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.403 23:08:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.403 23:08:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.403 23:08:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:38.403 23:08:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.403 23:08:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.403 23:08:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.403 23:08:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:38.403 23:08:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.403 23:08:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.403 23:08:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.403 23:08:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:38.403 23:08:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.403 23:08:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.403 23:08:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.403 23:08:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:38.403 23:08:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.403 23:08:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.403 23:08:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.403 23:08:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:38.403 23:08:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.403 23:08:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.403 23:08:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.403 23:08:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:38.403 23:08:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.403 23:08:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.403 23:08:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.403 23:08:53 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:38.403 23:08:53 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:38.403 23:08:53 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:38.403 00:05:38.403 real 0m1.497s 00:05:38.403 user 0m0.013s 00:05:38.403 sys 0m0.002s 00:05:38.403 23:08:53 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.403 23:08:53 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:38.403 ************************************ 00:05:38.403 END TEST accel_decomp_mcore 00:05:38.403 ************************************ 00:05:38.403 23:08:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:38.403 23:08:53 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:38.403 23:08:53 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:38.403 23:08:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.403 23:08:53 accel -- common/autotest_common.sh@10 -- # set +x 00:05:38.403 ************************************ 00:05:38.403 START TEST accel_decomp_full_mcore 00:05:38.403 ************************************ 00:05:38.403 23:08:53 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:38.403 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:38.403 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:38.403 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.403 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:38.403 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.403 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:38.403 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:38.403 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:38.403 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:38.403 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.403 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.403 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:38.403 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:38.403 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:38.403 [2024-07-15 23:08:53.526746] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
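Note: the run_test wrapper above ultimately launches accel_perf. Condensing the xtrace output, and with flag meanings read off the trace itself rather than taken from accel_perf --help, the multi-core full-buffer run amounts to (paths shortened to be relative to the spdk checkout):

    # -t 1          : run the workload for 1 second
    # -w decompress : workload under test
    # -l test/accel/bib : compressed input file
    # -y            : verify the decompressed output
    # -o 0          : presumably "operate on the whole input buffer" rather than a fixed chunk size
    # -m 0xf        : core mask, i.e. cores 0-3
    build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress \
        -l test/accel/bib -y -o 0 -m 0xf

The -c /dev/fd/62 argument only feeds in the JSON accel configuration that build_accel_config assembles; as the accel_json_cfg=() trace shows, that config is effectively empty here, so no hardware accel module is loaded and the software path (the "[[ -n software ]]" assertion) is what gets exercised.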
00:05:38.403 [2024-07-15 23:08:53.526824] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2227093 ] 00:05:38.403 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.403 [2024-07-15 23:08:53.591197] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:38.403 [2024-07-15 23:08:53.717760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.661 [2024-07-15 23:08:53.717801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:38.661 [2024-07-15 23:08:53.717827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:38.661 [2024-07-15 23:08:53.717831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.661 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:38.661 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.662 23:08:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.035 23:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:40.035 23:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.035 23:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.035 23:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.035 23:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:40.035 23:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.035 23:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.035 23:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.035 23:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:40.035 23:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.035 23:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.035 23:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.035 23:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:40.035 23:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.035 23:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.035 23:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.035 23:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:40.035 23:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.035 23:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.035 23:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.035 23:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:40.035 23:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.035 23:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.035 23:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.035 23:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:40.035 23:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.035 23:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.035 23:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.035 23:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:40.035 23:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.035 23:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.035 23:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.035 23:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:40.035 23:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.035 23:08:55 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:40.035 23:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.035 23:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:40.035 23:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:40.035 23:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:40.035 00:05:40.035 real 0m1.517s 00:05:40.035 user 0m4.866s 00:05:40.035 sys 0m0.162s 00:05:40.035 23:08:55 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.035 23:08:55 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:40.035 ************************************ 00:05:40.035 END TEST accel_decomp_full_mcore 00:05:40.035 ************************************ 00:05:40.035 23:08:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:40.035 23:08:55 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:40.035 23:08:55 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:40.035 23:08:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.035 23:08:55 accel -- common/autotest_common.sh@10 -- # set +x 00:05:40.035 ************************************ 00:05:40.035 START TEST accel_decomp_mthread 00:05:40.035 ************************************ 00:05:40.035 23:08:55 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:40.035 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:40.035 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:40.035 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.035 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:40.035 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.035 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:40.035 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:40.035 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:40.035 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:40.035 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.035 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.035 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:40.035 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:40.035 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:40.035 [2024-07-15 23:08:55.093031] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
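Note: accel_decomp_mthread, which starts here, drops the multi-core mask and instead passes -T 2 to the same binary; judging from the 0x1 core mask in the EAL parameter line that follows, this exercises two worker threads on a single core rather than four reactors. Condensed:

    # single core (default mask), two worker threads, default 4096-byte chunks
    build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress \
        -l test/accel/bib -y -T 2

Reading -T as "worker threads per core" is inferred from the test name (mthread) and the single-core mask, not confirmed against accel_perf's help text.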
00:05:40.035 [2024-07-15 23:08:55.093096] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2227253 ] 00:05:40.035 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.035 [2024-07-15 23:08:55.156804] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.035 [2024-07-15 23:08:55.280283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.035 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:40.035 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.035 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.035 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.035 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:40.035 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.035 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.035 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.035 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:40.035 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.035 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.035 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.035 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:40.035 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.035 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.035 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.035 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:40.293 23:08:55 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.293 23:08:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.665 23:08:56 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:05:41.665 23:08:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.665 23:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.665 23:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.665 23:08:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:41.665 23:08:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.665 23:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.665 23:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.665 23:08:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:41.665 23:08:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.665 23:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.665 23:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.665 23:08:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:41.665 23:08:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.665 23:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.665 23:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.665 23:08:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:41.665 23:08:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.665 23:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.665 23:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.665 23:08:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:41.665 23:08:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.665 23:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.665 23:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.665 23:08:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:41.665 23:08:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.665 23:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.665 23:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.665 23:08:56 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:41.665 23:08:56 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:41.665 23:08:56 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:41.665 00:05:41.665 real 0m1.502s 00:05:41.665 user 0m1.346s 00:05:41.665 sys 0m0.160s 00:05:41.665 23:08:56 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.665 23:08:56 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:41.665 ************************************ 00:05:41.665 END TEST accel_decomp_mthread 00:05:41.665 ************************************ 00:05:41.665 23:08:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:41.665 23:08:56 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:41.665 23:08:56 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:41.665 23:08:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.665 23:08:56 accel -- 
common/autotest_common.sh@10 -- # set +x 00:05:41.665 ************************************ 00:05:41.665 START TEST accel_decomp_full_mthread 00:05:41.665 ************************************ 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:41.665 [2024-07-15 23:08:56.644404] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
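Note: the last accel_perf variant in this block, accel_decomp_full_mthread, is the -T 2 run again with -o 0 added. Comparing the val= traces ('4096 bytes' for the plain mthread run above versus '111250 bytes' here and in the full_mcore run) suggests that -o 0 makes each operation cover the whole input (111250 bytes, per the trace) instead of 4096-byte chunks; the two invocations differ only in that flag:

    build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l test/accel/bib -y -T 2        # 4096-byte operations
    build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l test/accel/bib -y -o 0 -T 2   # full-buffer operations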
00:05:41.665 [2024-07-15 23:08:56.644473] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2227408 ] 00:05:41.665 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.665 [2024-07-15 23:08:56.707769] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.665 [2024-07-15 23:08:56.830099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.665 23:08:56 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.665 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.666 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.666 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:41.666 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.666 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.666 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.666 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:05:41.666 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.666 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.666 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.666 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:41.666 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.666 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.666 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.666 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:41.666 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.666 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.666 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.666 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:41.666 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.666 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.666 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:41.666 23:08:56 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:41.666 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:41.666 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:41.666 23:08:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:43.036 23:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:43.036 23:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:43.036 23:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:43.036 23:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:43.036 23:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:43.036 23:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:43.036 23:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:43.036 23:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:43.036 23:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:43.036 23:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:43.036 23:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:43.036 23:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:43.036 23:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:43.036 23:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:43.036 23:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:43.036 23:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:43.036 23:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:43.036 23:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:43.036 23:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:43.036 23:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:43.037 23:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:43.037 23:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:43.037 23:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:43.037 23:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:43.037 23:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:43.037 23:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:43.037 23:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:43.037 23:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:43.037 23:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:43.037 23:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:43.037 23:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:43.037 00:05:43.037 real 0m1.515s 00:05:43.037 user 0m1.371s 00:05:43.037 sys 0m0.147s 00:05:43.037 23:08:58 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.037 23:08:58 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:43.037 ************************************ 00:05:43.037 END 
TEST accel_decomp_full_mthread 00:05:43.037 ************************************ 00:05:43.037 23:08:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:43.037 23:08:58 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:05:43.037 23:08:58 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:43.037 23:08:58 accel -- accel/accel.sh@137 -- # build_accel_config 00:05:43.037 23:08:58 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:43.037 23:08:58 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:43.037 23:08:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.037 23:08:58 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:43.037 23:08:58 accel -- common/autotest_common.sh@10 -- # set +x 00:05:43.037 23:08:58 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.037 23:08:58 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.037 23:08:58 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:43.037 23:08:58 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:43.037 23:08:58 accel -- accel/accel.sh@41 -- # jq -r . 00:05:43.037 ************************************ 00:05:43.037 START TEST accel_dif_functional_tests 00:05:43.037 ************************************ 00:05:43.037 23:08:58 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:43.037 [2024-07-15 23:08:58.229084] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:05:43.037 [2024-07-15 23:08:58.229158] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2227688 ] 00:05:43.037 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.037 [2024-07-15 23:08:58.296087] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:43.296 [2024-07-15 23:08:58.422771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.296 [2024-07-15 23:08:58.422801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:43.296 [2024-07-15 23:08:58.422805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.296 00:05:43.296 00:05:43.296 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.296 http://cunit.sourceforge.net/ 00:05:43.296 00:05:43.296 00:05:43.296 Suite: accel_dif 00:05:43.296 Test: verify: DIF generated, GUARD check ...passed 00:05:43.296 Test: verify: DIF generated, APPTAG check ...passed 00:05:43.296 Test: verify: DIF generated, REFTAG check ...passed 00:05:43.296 Test: verify: DIF not generated, GUARD check ...[2024-07-15 23:08:58.526711] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:43.296 passed 00:05:43.296 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 23:08:58.526805] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:43.296 passed 00:05:43.296 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 23:08:58.526853] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:43.296 passed 00:05:43.296 Test: verify: APPTAG correct, APPTAG check ...passed 00:05:43.296 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 
23:08:58.526929] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:05:43.296 passed 00:05:43.296 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:05:43.296 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:05:43.296 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:05:43.296 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 23:08:58.527085] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:05:43.296 passed 00:05:43.296 Test: verify copy: DIF generated, GUARD check ...passed 00:05:43.296 Test: verify copy: DIF generated, APPTAG check ...passed 00:05:43.296 Test: verify copy: DIF generated, REFTAG check ...passed 00:05:43.296 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 23:08:58.527277] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:43.296 passed 00:05:43.296 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 23:08:58.527320] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:43.296 passed 00:05:43.296 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 23:08:58.527360] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:43.296 passed 00:05:43.296 Test: generate copy: DIF generated, GUARD check ...passed 00:05:43.296 Test: generate copy: DIF generated, APTTAG check ...passed 00:05:43.296 Test: generate copy: DIF generated, REFTAG check ...passed 00:05:43.296 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:05:43.296 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:05:43.296 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:05:43.296 Test: generate copy: iovecs-len validate ...[2024-07-15 23:08:58.527619] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:05:43.296 passed 00:05:43.296 Test: generate copy: buffer alignment validate ...passed 00:05:43.296 00:05:43.296 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.296 suites 1 1 n/a 0 0 00:05:43.296 tests 26 26 26 0 0 00:05:43.296 asserts 115 115 115 0 n/a 00:05:43.296 00:05:43.296 Elapsed time = 0.003 seconds 00:05:43.554 00:05:43.554 real 0m0.611s 00:05:43.554 user 0m0.919s 00:05:43.554 sys 0m0.189s 00:05:43.554 23:08:58 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.554 23:08:58 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:05:43.554 ************************************ 00:05:43.554 END TEST accel_dif_functional_tests 00:05:43.554 ************************************ 00:05:43.554 23:08:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:43.554 00:05:43.554 real 0m33.671s 00:05:43.554 user 0m37.037s 00:05:43.554 sys 0m4.702s 00:05:43.554 23:08:58 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.554 23:08:58 accel -- common/autotest_common.sh@10 -- # set +x 00:05:43.554 ************************************ 00:05:43.554 END TEST accel 00:05:43.554 ************************************ 00:05:43.554 23:08:58 -- common/autotest_common.sh@1142 -- # return 0 00:05:43.554 23:08:58 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:43.554 23:08:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.554 23:08:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.554 23:08:58 -- common/autotest_common.sh@10 -- # set +x 00:05:43.554 ************************************ 00:05:43.554 START TEST accel_rpc 00:05:43.554 ************************************ 00:05:43.554 23:08:58 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:43.811 * Looking for test storage... 00:05:43.811 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:43.811 23:08:58 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:43.811 23:08:58 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2227758 00:05:43.811 23:08:58 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:05:43.811 23:08:58 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2227758 00:05:43.811 23:08:58 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 2227758 ']' 00:05:43.811 23:08:58 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.811 23:08:58 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.811 23:08:58 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.811 23:08:58 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.811 23:08:58 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.811 [2024-07-15 23:08:58.967143] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
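Note: the accel_rpc suite starting here drives the same opcode plumbing over JSON-RPC instead of through accel_perf. spdk_tgt is launched with --wait-for-rpc so that accel_assign_opc can be issued before the framework (and with it the accel layer) finishes initializing; the rpc_cmd calls traced below correspond roughly to:

    build/bin/spdk_tgt --wait-for-rpc &
    scripts/rpc.py accel_assign_opc -o copy -m incorrect   # accepted pre-init even with a bogus module name
    scripts/rpc.py accel_assign_opc -o copy -m software    # re-assign the copy opcode to the software module
    scripts/rpc.py framework_start_init                    # now let the accel layer come up
    scripts/rpc.py accel_get_opc_assignments | jq -r .copy # the test greps this output for "software"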
00:05:43.811 [2024-07-15 23:08:58.967227] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2227758 ] 00:05:43.811 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.811 [2024-07-15 23:08:59.033323] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.068 [2024-07-15 23:08:59.156854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.999 23:08:59 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.999 23:08:59 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:44.999 23:08:59 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:05:44.999 23:08:59 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:05:44.999 23:08:59 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:05:44.999 23:08:59 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:05:44.999 23:08:59 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:05:44.999 23:08:59 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:44.999 23:08:59 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.999 23:08:59 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.000 ************************************ 00:05:45.000 START TEST accel_assign_opcode 00:05:45.000 ************************************ 00:05:45.000 23:08:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:05:45.000 23:08:59 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:05:45.000 23:08:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.000 23:08:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:45.000 [2024-07-15 23:08:59.991419] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:05:45.000 23:08:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.000 23:08:59 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:05:45.000 23:08:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.000 23:08:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:45.000 [2024-07-15 23:08:59.999432] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:05:45.000 23:09:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.000 23:09:00 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:05:45.000 23:09:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.000 23:09:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:45.000 23:09:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.000 23:09:00 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:05:45.000 23:09:00 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:05:45.000 23:09:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 
00:05:45.000 23:09:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:45.000 23:09:00 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:05:45.000 23:09:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.000 software 00:05:45.000 00:05:45.000 real 0m0.297s 00:05:45.000 user 0m0.039s 00:05:45.000 sys 0m0.005s 00:05:45.000 23:09:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.000 23:09:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:45.000 ************************************ 00:05:45.000 END TEST accel_assign_opcode 00:05:45.000 ************************************ 00:05:45.000 23:09:00 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:45.000 23:09:00 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2227758 00:05:45.000 23:09:00 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 2227758 ']' 00:05:45.000 23:09:00 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 2227758 00:05:45.000 23:09:00 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:05:45.000 23:09:00 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:45.000 23:09:00 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2227758 00:05:45.256 23:09:00 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:45.256 23:09:00 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:45.256 23:09:00 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2227758' 00:05:45.256 killing process with pid 2227758 00:05:45.256 23:09:00 accel_rpc -- common/autotest_common.sh@967 -- # kill 2227758 00:05:45.256 23:09:00 accel_rpc -- common/autotest_common.sh@972 -- # wait 2227758 00:05:45.514 00:05:45.514 real 0m1.923s 00:05:45.514 user 0m2.072s 00:05:45.514 sys 0m0.493s 00:05:45.514 23:09:00 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.514 23:09:00 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.514 ************************************ 00:05:45.514 END TEST accel_rpc 00:05:45.514 ************************************ 00:05:45.514 23:09:00 -- common/autotest_common.sh@1142 -- # return 0 00:05:45.514 23:09:00 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:45.514 23:09:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.514 23:09:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.514 23:09:00 -- common/autotest_common.sh@10 -- # set +x 00:05:45.771 ************************************ 00:05:45.771 START TEST app_cmdline 00:05:45.771 ************************************ 00:05:45.771 23:09:00 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:45.771 * Looking for test storage... 
00:05:45.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:45.771 23:09:00 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:45.771 23:09:00 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2228148 00:05:45.771 23:09:00 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:45.771 23:09:00 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2228148 00:05:45.771 23:09:00 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 2228148 ']' 00:05:45.771 23:09:00 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.771 23:09:00 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.771 23:09:00 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.771 23:09:00 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.771 23:09:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:45.771 [2024-07-15 23:09:00.941999] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:05:45.771 [2024-07-15 23:09:00.942139] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2228148 ] 00:05:45.771 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.771 [2024-07-15 23:09:01.001232] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.028 [2024-07-15 23:09:01.113412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.285 23:09:01 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:46.285 23:09:01 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:05:46.285 23:09:01 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:46.542 { 00:05:46.542 "version": "SPDK v24.09-pre git sha1 c1860effd", 00:05:46.542 "fields": { 00:05:46.542 "major": 24, 00:05:46.542 "minor": 9, 00:05:46.542 "patch": 0, 00:05:46.542 "suffix": "-pre", 00:05:46.542 "commit": "c1860effd" 00:05:46.542 } 00:05:46.542 } 00:05:46.542 23:09:01 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:46.542 23:09:01 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:46.542 23:09:01 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:46.542 23:09:01 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:46.542 23:09:01 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:46.542 23:09:01 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.542 23:09:01 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:46.542 23:09:01 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:46.542 23:09:01 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:46.542 23:09:01 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.542 23:09:01 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:46.542 23:09:01 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:46.542 23:09:01 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:46.542 23:09:01 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:05:46.542 23:09:01 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:46.542 23:09:01 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:46.542 23:09:01 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:46.542 23:09:01 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:46.542 23:09:01 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:46.542 23:09:01 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:46.542 23:09:01 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:46.542 23:09:01 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:46.542 23:09:01 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:46.542 23:09:01 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:46.799 request: 00:05:46.799 { 00:05:46.799 "method": "env_dpdk_get_mem_stats", 00:05:46.799 "req_id": 1 00:05:46.799 } 00:05:46.799 Got JSON-RPC error response 00:05:46.799 response: 00:05:46.799 { 00:05:46.799 "code": -32601, 00:05:46.799 "message": "Method not found" 00:05:46.799 } 00:05:46.799 23:09:01 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:05:46.799 23:09:01 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:46.799 23:09:01 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:46.799 23:09:01 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:46.799 23:09:01 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2228148 00:05:46.799 23:09:01 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 2228148 ']' 00:05:46.799 23:09:01 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 2228148 00:05:46.799 23:09:01 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:05:46.799 23:09:01 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:46.799 23:09:01 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2228148 00:05:46.799 23:09:01 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:46.799 23:09:01 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:46.799 23:09:01 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2228148' 00:05:46.799 killing process with pid 2228148 00:05:46.799 23:09:01 app_cmdline -- common/autotest_common.sh@967 -- # kill 2228148 00:05:46.799 23:09:01 app_cmdline -- common/autotest_common.sh@972 -- # wait 2228148 00:05:47.365 00:05:47.365 real 0m1.566s 00:05:47.365 user 0m1.857s 00:05:47.365 sys 0m0.481s 00:05:47.365 23:09:02 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
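The cmdline test above boots spdk_tgt with an RPC allowlist, so only the two whitelisted methods answer and anything else gets the JSON-RPC -32601 "Method not found" error shown in the response block. A condensed sketch of the same probe, assuming a run from the repo root against the default RPC socket:

  build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  scripts/rpc.py spdk_get_version         # allowed: returns the version object
  scripts/rpc.py rpc_get_methods          # allowed: lists exactly the two whitelisted methods
  scripts/rpc.py env_dpdk_get_mem_stats   # blocked: fails with code -32601, "Method not found"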
00:05:47.365 23:09:02 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:47.365 ************************************ 00:05:47.365 END TEST app_cmdline 00:05:47.365 ************************************ 00:05:47.365 23:09:02 -- common/autotest_common.sh@1142 -- # return 0 00:05:47.365 23:09:02 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:47.365 23:09:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:47.365 23:09:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.366 23:09:02 -- common/autotest_common.sh@10 -- # set +x 00:05:47.366 ************************************ 00:05:47.366 START TEST version 00:05:47.366 ************************************ 00:05:47.366 23:09:02 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:47.366 * Looking for test storage... 00:05:47.366 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:47.366 23:09:02 version -- app/version.sh@17 -- # get_header_version major 00:05:47.366 23:09:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:47.366 23:09:02 version -- app/version.sh@14 -- # cut -f2 00:05:47.366 23:09:02 version -- app/version.sh@14 -- # tr -d '"' 00:05:47.366 23:09:02 version -- app/version.sh@17 -- # major=24 00:05:47.366 23:09:02 version -- app/version.sh@18 -- # get_header_version minor 00:05:47.366 23:09:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:47.366 23:09:02 version -- app/version.sh@14 -- # cut -f2 00:05:47.366 23:09:02 version -- app/version.sh@14 -- # tr -d '"' 00:05:47.366 23:09:02 version -- app/version.sh@18 -- # minor=9 00:05:47.366 23:09:02 version -- app/version.sh@19 -- # get_header_version patch 00:05:47.366 23:09:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:47.366 23:09:02 version -- app/version.sh@14 -- # cut -f2 00:05:47.366 23:09:02 version -- app/version.sh@14 -- # tr -d '"' 00:05:47.366 23:09:02 version -- app/version.sh@19 -- # patch=0 00:05:47.366 23:09:02 version -- app/version.sh@20 -- # get_header_version suffix 00:05:47.366 23:09:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:47.366 23:09:02 version -- app/version.sh@14 -- # cut -f2 00:05:47.366 23:09:02 version -- app/version.sh@14 -- # tr -d '"' 00:05:47.366 23:09:02 version -- app/version.sh@20 -- # suffix=-pre 00:05:47.366 23:09:02 version -- app/version.sh@22 -- # version=24.9 00:05:47.366 23:09:02 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:47.366 23:09:02 version -- app/version.sh@28 -- # version=24.9rc0 00:05:47.366 23:09:02 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:47.366 23:09:02 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:05:47.366 23:09:02 version -- app/version.sh@30 -- # py_version=24.9rc0 00:05:47.366 23:09:02 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:05:47.366 00:05:47.366 real 0m0.110s 00:05:47.366 user 0m0.065s 00:05:47.366 sys 0m0.067s 00:05:47.366 23:09:02 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.366 23:09:02 version -- common/autotest_common.sh@10 -- # set +x 00:05:47.366 ************************************ 00:05:47.366 END TEST version 00:05:47.366 ************************************ 00:05:47.366 23:09:02 -- common/autotest_common.sh@1142 -- # return 0 00:05:47.366 23:09:02 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:05:47.366 23:09:02 -- spdk/autotest.sh@198 -- # uname -s 00:05:47.366 23:09:02 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:05:47.366 23:09:02 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:05:47.366 23:09:02 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:05:47.366 23:09:02 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:05:47.366 23:09:02 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:47.366 23:09:02 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:47.366 23:09:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:47.366 23:09:02 -- common/autotest_common.sh@10 -- # set +x 00:05:47.366 23:09:02 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:47.366 23:09:02 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:05:47.366 23:09:02 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:05:47.366 23:09:02 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:05:47.366 23:09:02 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:05:47.366 23:09:02 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:05:47.366 23:09:02 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:47.366 23:09:02 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:47.366 23:09:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.366 23:09:02 -- common/autotest_common.sh@10 -- # set +x 00:05:47.366 ************************************ 00:05:47.366 START TEST nvmf_tcp 00:05:47.366 ************************************ 00:05:47.366 23:09:02 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:47.366 * Looking for test storage... 00:05:47.626 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:47.626 23:09:02 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:47.626 23:09:02 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:47.626 23:09:02 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:47.626 23:09:02 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:05:47.626 23:09:02 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:47.626 23:09:02 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:47.626 23:09:02 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:47.626 23:09:02 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:47.626 23:09:02 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:47.626 23:09:02 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:47.626 23:09:02 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:47.626 23:09:02 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:47.626 23:09:02 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:47.626 23:09:02 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:47.626 23:09:02 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:05:47.626 23:09:02 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:05:47.626 23:09:02 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:47.626 23:09:02 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:47.626 23:09:02 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:47.626 23:09:02 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:47.626 23:09:02 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:47.626 23:09:02 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:47.626 23:09:02 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:47.626 23:09:02 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:47.626 23:09:02 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.626 23:09:02 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.626 23:09:02 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.626 23:09:02 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:05:47.626 23:09:02 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.626 23:09:02 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:05:47.626 23:09:02 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:47.626 23:09:02 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:47.626 23:09:02 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:47.626 23:09:02 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:47.626 23:09:02 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:47.626 23:09:02 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:47.626 23:09:02 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:47.626 23:09:02 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:47.626 23:09:02 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:47.626 23:09:02 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:05:47.626 23:09:02 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:05:47.626 23:09:02 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:47.626 23:09:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:47.626 23:09:02 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:05:47.626 23:09:02 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:47.626 23:09:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:47.626 23:09:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.626 23:09:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:47.626 ************************************ 00:05:47.627 START TEST nvmf_example 00:05:47.627 ************************************ 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:47.627 * Looking for test storage... 
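nvmf/common.sh, sourced above, derives the initiator identity from nvme-cli: the host NQN comes from nvme gen-hostnqn and the host ID is the UUID suffix of that NQN. A minimal sketch of the same derivation (the parameter expansion is one plausible way to take the suffix, not necessarily the script's exact code):

  NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}       # keep only the trailing uuid
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")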
00:05:47.627 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:05:47.627 23:09:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:50.154 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:50.154 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:05:50.154 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:50.154 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:50.154 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:50.154 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:50.154 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:50.154 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:05:50.154 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:50.154 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:05:50.154 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:05:50.154 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:05:50.154 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:05:50.154 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:05:50.154 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:05:50.154 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:50.154 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:50.154 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:50.154 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:50.154 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:50.154 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:50.154 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:50.154 23:09:04 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:50.154 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:50.154 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:50.154 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:50.154 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:50.154 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:50.154 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:50.154 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:50.154 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:50.154 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:50.154 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:50.154 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:05:50.154 Found 0000:84:00.0 (0x8086 - 0x159b) 00:05:50.154 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:50.154 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:50.154 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:50.154 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:50.154 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:50.154 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:50.154 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:05:50.154 Found 0000:84:00.1 (0x8086 - 0x159b) 00:05:50.154 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:50.155 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:50.155 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:50.155 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:50.155 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:50.155 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:50.155 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:50.155 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:50.155 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:50.155 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:50.155 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:50.155 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:50.155 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:50.155 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:50.155 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:50.155 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:05:50.155 Found net devices under 
0000:84:00.0: cvl_0_0 00:05:50.155 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:50.155 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:50.155 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:50.155 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:50.155 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:50.155 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:50.155 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:50.155 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:50.155 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:05:50.155 Found net devices under 0000:84:00.1: cvl_0_1 00:05:50.155 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:50.155 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:50.155 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:05:50.155 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:50.155 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:05:50.155 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:05:50.155 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:50.155 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:50.155 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:50.155 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:50.155 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:50.155 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:50.155 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:50.155 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:50.155 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:50.155 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:05:50.155 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:50.155 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:50.155 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:50.155 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:50.155 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:50.155 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:50.155 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:50.155 23:09:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:50.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:50.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:05:50.155 00:05:50.155 --- 10.0.0.2 ping statistics --- 00:05:50.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:50.155 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:50.155 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:50.155 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:05:50.155 00:05:50.155 --- 10.0.0.1 ping statistics --- 00:05:50.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:50.155 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2230260 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2230260 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 2230260 ']' 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
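At this point the harness has turned the two E810 (ice-driven) ports into a back-to-back TCP test pair: cvl_0_0 is moved into a private network namespace and addressed as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and TCP port 4420 is opened for the listener. Condensed from the trace above:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator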
00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:50.155 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:05:50.155 23:09:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:05:50.155 EAL: No free 2048 kB hugepages reported on node 1 
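The example target is then provisioned entirely over JSON-RPC before spdk_nvme_perf is pointed at it from the initiator side. A condensed sketch of the sequence traced above (in the real run the RPCs are issued via rpc_cmd inside the target's namespace; the default RPC socket is assumed here):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512                                    # -> Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'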
00:06:02.439 Initializing NVMe Controllers 00:06:02.440 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:02.440 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:02.440 Initialization complete. Launching workers. 00:06:02.440 ======================================================== 00:06:02.440 Latency(us) 00:06:02.440 Device Information : IOPS MiB/s Average min max 00:06:02.440 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14730.99 57.54 4347.15 789.19 15883.50 00:06:02.440 ======================================================== 00:06:02.440 Total : 14730.99 57.54 4347.15 789.19 15883.50 00:06:02.440 00:06:02.440 23:09:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:02.440 23:09:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:02.440 23:09:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:02.440 23:09:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:06:02.440 23:09:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:02.440 23:09:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:06:02.440 23:09:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:02.440 23:09:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:02.440 rmmod nvme_tcp 00:06:02.440 rmmod nvme_fabrics 00:06:02.440 rmmod nvme_keyring 00:06:02.440 23:09:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:02.440 23:09:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:06:02.440 23:09:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:06:02.440 23:09:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2230260 ']' 00:06:02.440 23:09:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2230260 00:06:02.440 23:09:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 2230260 ']' 00:06:02.440 23:09:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 2230260 00:06:02.440 23:09:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:06:02.440 23:09:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:02.440 23:09:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2230260 00:06:02.440 23:09:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:06:02.440 23:09:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:06:02.440 23:09:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2230260' 00:06:02.440 killing process with pid 2230260 00:06:02.440 23:09:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 2230260 00:06:02.440 23:09:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 2230260 00:06:02.440 nvmf threads initialize successfully 00:06:02.440 bdev subsystem init successfully 00:06:02.440 created a nvmf target service 00:06:02.440 create targets's poll groups done 00:06:02.440 all subsystems of target started 00:06:02.440 nvmf target is running 00:06:02.440 all subsystems of target stopped 00:06:02.440 destroy targets's poll groups done 00:06:02.440 destroyed the nvmf target service 00:06:02.440 bdev subsystem finish successfully 00:06:02.440 nvmf threads destroy successfully 00:06:02.440 23:09:15 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:02.440 23:09:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:02.440 23:09:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:02.440 23:09:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:02.440 23:09:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:02.440 23:09:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:02.440 23:09:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:02.440 23:09:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:02.699 23:09:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:02.699 23:09:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:02.699 23:09:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:02.699 23:09:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:02.699 00:06:02.699 real 0m15.269s 00:06:02.699 user 0m41.713s 00:06:02.699 sys 0m3.594s 00:06:02.699 23:09:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.699 23:09:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:02.699 ************************************ 00:06:02.699 END TEST nvmf_example 00:06:02.699 ************************************ 00:06:02.959 23:09:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:02.959 23:09:18 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:02.959 23:09:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:02.959 23:09:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.959 23:09:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:02.959 ************************************ 00:06:02.959 START TEST nvmf_filesystem 00:06:02.959 ************************************ 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:02.959 * Looking for test storage... 
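nvmftestfini, traced just above, unwinds the example in roughly the reverse order of setup: the host-side NVMe/TCP modules are unloaded, the target process is killed, the test namespace is removed and the initiator address flushed. A rough recap; the namespace removal happens inside the _remove_spdk_ns helper, whose body is not traced, so that step is an assumption:

  modprobe -v -r nvme-tcp           # drops nvme_tcp / nvme_fabrics / nvme_keyring as shown above
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                   # killprocess 2230260 in the trace
  ip netns del cvl_0_0_ns_spdk      # assumed: what _remove_spdk_ns likely does
  ip -4 addr flush cvl_0_1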
00:06:02.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:02.959 23:09:18 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:02.959 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:02.960 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:02.960 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:02.960 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:02.960 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:02.960 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:02.960 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:02.960 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:02.960 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:02.960 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:02.960 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:02.960 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:02.960 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:02.960 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:02.960 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:02.960 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:02.960 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:02.960 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:02.960 23:09:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:02.960 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:02.960 23:09:18 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:02.960 23:09:18 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:02.960 23:09:18 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:02.960 23:09:18 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:02.960 23:09:18 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:02.960 23:09:18 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:02.960 23:09:18 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:02.960 23:09:18 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:02.960 23:09:18 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:02.960 23:09:18 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:02.960 23:09:18 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:02.960 23:09:18 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:02.960 23:09:18 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:02.960 23:09:18 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:02.960 23:09:18 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:02.960 #define SPDK_CONFIG_H 00:06:02.960 #define SPDK_CONFIG_APPS 1 00:06:02.960 #define SPDK_CONFIG_ARCH native 00:06:02.960 #undef SPDK_CONFIG_ASAN 00:06:02.960 #undef SPDK_CONFIG_AVAHI 00:06:02.960 #undef SPDK_CONFIG_CET 00:06:02.960 #define SPDK_CONFIG_COVERAGE 1 00:06:02.960 #define SPDK_CONFIG_CROSS_PREFIX 00:06:02.960 #undef SPDK_CONFIG_CRYPTO 00:06:02.960 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:02.960 #undef SPDK_CONFIG_CUSTOMOCF 00:06:02.960 #undef SPDK_CONFIG_DAOS 00:06:02.960 #define SPDK_CONFIG_DAOS_DIR 00:06:02.960 #define SPDK_CONFIG_DEBUG 1 00:06:02.960 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:02.960 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:02.960 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:02.960 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:02.960 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:02.960 #undef SPDK_CONFIG_DPDK_UADK 00:06:02.960 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:02.960 #define SPDK_CONFIG_EXAMPLES 1 00:06:02.960 #undef SPDK_CONFIG_FC 00:06:02.960 #define SPDK_CONFIG_FC_PATH 00:06:02.960 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:02.960 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:02.960 #undef SPDK_CONFIG_FUSE 00:06:02.960 #undef SPDK_CONFIG_FUZZER 00:06:02.960 #define SPDK_CONFIG_FUZZER_LIB 00:06:02.960 #undef SPDK_CONFIG_GOLANG 00:06:02.960 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:02.960 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:02.960 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:02.960 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:02.960 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:02.960 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:02.960 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:02.960 #define SPDK_CONFIG_IDXD 1 00:06:02.960 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:02.960 #undef SPDK_CONFIG_IPSEC_MB 00:06:02.960 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:02.960 #define SPDK_CONFIG_ISAL 1 00:06:02.960 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:02.960 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:02.960 #define SPDK_CONFIG_LIBDIR 00:06:02.960 #undef SPDK_CONFIG_LTO 00:06:02.960 #define SPDK_CONFIG_MAX_LCORES 128 00:06:02.960 #define SPDK_CONFIG_NVME_CUSE 1 00:06:02.960 #undef SPDK_CONFIG_OCF 00:06:02.960 #define SPDK_CONFIG_OCF_PATH 00:06:02.960 #define 
SPDK_CONFIG_OPENSSL_PATH 00:06:02.960 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:02.960 #define SPDK_CONFIG_PGO_DIR 00:06:02.960 #undef SPDK_CONFIG_PGO_USE 00:06:02.960 #define SPDK_CONFIG_PREFIX /usr/local 00:06:02.960 #undef SPDK_CONFIG_RAID5F 00:06:02.960 #undef SPDK_CONFIG_RBD 00:06:02.960 #define SPDK_CONFIG_RDMA 1 00:06:02.960 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:02.960 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:02.960 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:02.960 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:02.960 #define SPDK_CONFIG_SHARED 1 00:06:02.960 #undef SPDK_CONFIG_SMA 00:06:02.960 #define SPDK_CONFIG_TESTS 1 00:06:02.960 #undef SPDK_CONFIG_TSAN 00:06:02.960 #define SPDK_CONFIG_UBLK 1 00:06:02.960 #define SPDK_CONFIG_UBSAN 1 00:06:02.960 #undef SPDK_CONFIG_UNIT_TESTS 00:06:02.960 #undef SPDK_CONFIG_URING 00:06:02.960 #define SPDK_CONFIG_URING_PATH 00:06:02.960 #undef SPDK_CONFIG_URING_ZNS 00:06:02.960 #undef SPDK_CONFIG_USDT 00:06:02.960 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:02.960 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:02.960 #define SPDK_CONFIG_VFIO_USER 1 00:06:02.960 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:02.960 #define SPDK_CONFIG_VHOST 1 00:06:02.960 #define SPDK_CONFIG_VIRTIO 1 00:06:02.960 #undef SPDK_CONFIG_VTUNE 00:06:02.960 #define SPDK_CONFIG_VTUNE_DIR 00:06:02.960 #define SPDK_CONFIG_WERROR 1 00:06:02.960 #define SPDK_CONFIG_WPDK_DIR 00:06:02.960 #undef SPDK_CONFIG_XNVME 00:06:02.960 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:02.960 23:09:18 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:02.960 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:02.960 23:09:18 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:02.960 23:09:18 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:02.960 23:09:18 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:02.960 23:09:18 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.960 23:09:18 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.960 23:09:18 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.960 23:09:18 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:02.961 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:06:02.961 23:09:18 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
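
The long run of ": 0" / ": 1" entries followed by "export SPDK_TEST_*" in the trace above is consistent with the common bash default-then-export idiom: a flag keeps whatever value was already injected by autorun-spdk.conf and only falls back to a default otherwise. A minimal, hypothetical sketch of that idiom (illustrative only, not the verbatim autotest_common.sh source; the default values shown here are assumptions):

#!/usr/bin/env bash
# Hypothetical illustration of the default-then-export pattern visible in the xtrace.
# A value exported earlier (e.g. SPDK_TEST_NVMF=1 from autorun-spdk.conf) survives;
# otherwise the flag takes the default to the right of ':='.
: "${SPDK_RUN_FUNCTIONAL_TEST:=0}"; export SPDK_RUN_FUNCTIONAL_TEST
: "${SPDK_TEST_NVMF:=0}";           export SPDK_TEST_NVMF
: "${SPDK_TEST_NVME_CLI:=0}";       export SPDK_TEST_NVME_CLI
: "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"; export SPDK_TEST_NVMF_TRANSPORT
echo "functional=$SPDK_RUN_FUNCTIONAL_TEST nvmf=$SPDK_TEST_NVMF transport=$SPDK_TEST_NVMF_TRANSPORT"

When traced with set -x, each such line prints as ": 1" (or ": 0", ": tcp", ...) followed by the matching export, which is exactly the pattern recorded in the log.
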
00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 2232346 ]] 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 2232346 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.sPPJya 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.sPPJya/tests/target /tmp/spdk.sPPJya 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:02.962 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=949354496 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4335075328 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=38662754304 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=45083312128 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6420557824 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=22486020096 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=22541656064 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=55635968 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=9007878144 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=9016664064 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8785920 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=22540877824 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=22541656064 00:06:02.963 23:09:18 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=778240 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=4508323840 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=4508327936 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:06:02.963 * Looking for test storage... 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=38662754304 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8635150336 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:02.963 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:06:02.963 23:09:18 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
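
Just before this point the harness resolved where to put test data (set_test_storage): it built a list of candidate directories, parsed df output, and accepted the first candidate whose filesystem offers at least the requested ~2 GiB, printing "* Found test storage at ..." once one qualifies. A rough, hypothetical sketch of that selection logic, assuming a fixed 2 GiB requirement and simplified candidate handling (not the SPDK source):

# Hypothetical sketch of free-space-based test-directory selection.
requested_size=$((2 * 1024 * 1024 * 1024))   # ~2 GiB, matching the trace
pick_test_storage() {
    local dir avail_kb
    for dir in "$@"; do
        mkdir -p "$dir" || continue
        # df -P prints POSIX output in 1K blocks; column 4 is "Available".
        avail_kb=$(df -P "$dir" | awk 'NR==2 {print $4}')
        if (( avail_kb * 1024 >= requested_size )); then
            echo "$dir"
            return 0
        fi
    done
    return 1
}
storage=$(pick_test_storage "$PWD/tests" "$(mktemp -udt spdk.XXXXXX)") || exit 1
echo "* Found test storage at $storage"

The mktemp -udt fallback mirrors the /tmp/spdk.sPPJya path generated in this run; the candidate list and threshold above are assumptions for illustration.
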
00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:02.963 23:09:18 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.964 23:09:18 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.964 23:09:18 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.964 23:09:18 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:02.964 23:09:18 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.964 23:09:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:06:02.964 23:09:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:02.964 23:09:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:02.964 23:09:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:02.964 23:09:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:02.964 23:09:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:02.964 23:09:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:02.964 23:09:18 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:02.964 23:09:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:02.964 23:09:18 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:02.964 23:09:18 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:02.964 23:09:18 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:06:02.964 23:09:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:02.964 23:09:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:02.964 23:09:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:02.964 23:09:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:02.964 23:09:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:02.964 23:09:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:02.964 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:02.964 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:02.964 23:09:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:02.964 23:09:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:02.964 23:09:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:06:02.964 23:09:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:06:05.493 Found 0000:84:00.0 (0x8086 - 0x159b) 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:06:05.493 Found 0000:84:00.1 (0x8086 - 0x159b) 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:05.493 23:09:20 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:06:05.493 Found net devices under 0000:84:00.0: cvl_0_0 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:06:05.493 Found net devices under 0000:84:00.1: cvl_0_1 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:05.493 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:05.494 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:05.494 23:09:20 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:05.494 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:05.494 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:05.494 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:05.494 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:05.494 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:05.494 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:05.494 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:06:05.494 00:06:05.494 --- 10.0.0.2 ping statistics --- 00:06:05.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:05.494 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:06:05.494 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:05.494 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:05.494 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:06:05.494 00:06:05.494 --- 10.0.0.1 ping statistics --- 00:06:05.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:05.494 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:06:05.494 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:05.494 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:06:05.494 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:05.494 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:05.494 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:05.494 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:05.494 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:05.494 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:05.494 23:09:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:05.494 23:09:20 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:05.494 23:09:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:05.494 23:09:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.494 23:09:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:05.494 ************************************ 00:06:05.494 START TEST nvmf_filesystem_no_in_capsule 00:06:05.494 ************************************ 00:06:05.494 23:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:06:05.494 23:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:06:05.494 23:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:05.494 23:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:05.494 23:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:06:05.494 23:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:05.494 23:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2233987 00:06:05.494 23:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:05.494 23:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2233987 00:06:05.494 23:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 2233987 ']' 00:06:05.494 23:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.494 23:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.494 23:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.494 23:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.494 23:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:05.494 [2024-07-15 23:09:20.542288] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:06:05.494 [2024-07-15 23:09:20.542383] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:05.494 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.494 [2024-07-15 23:09:20.608164] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:05.494 [2024-07-15 23:09:20.720577] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:05.494 [2024-07-15 23:09:20.720638] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:05.494 [2024-07-15 23:09:20.720667] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:05.494 [2024-07-15 23:09:20.720682] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:05.494 [2024-07-15 23:09:20.720692] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
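For reference, the nvmf_tcp_init and nvmfappstart steps traced above reduce to a short manual sequence: move one E810 port into a private network namespace for the target, address both ends, open TCP port 4420 toward the target, verify reachability in both directions, and start nvmf_tgt inside the namespace. The sketch below assumes the cvl_0_0/cvl_0_1 interface names, the 10.0.0.0/24 addressing, and the build path seen in this particular run; it illustrates the flow and is not the harness script itself.

# target side gets its own kernel network stack via a namespace
sudo ip -4 addr flush cvl_0_0
sudo ip -4 addr flush cvl_0_1
sudo ip netns add cvl_0_0_ns_spdk
sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk
sudo ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator interface (stays in the root namespace)
sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target interface (inside the namespace)
sudo ip link set cvl_0_1 up
sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up
sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                        # initiator -> target sanity check
sudo ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator sanity check
# launch the NVMe-oF target inside the namespace, with the same flags the harness uses
sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &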
00:06:05.494 [2024-07-15 23:09:20.720830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.494 [2024-07-15 23:09:20.720900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:05.494 [2024-07-15 23:09:20.720930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:05.494 [2024-07-15 23:09:20.720931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.752 23:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:05.752 23:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:05.752 23:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:05.752 23:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:05.752 23:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:05.752 23:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:05.752 23:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:05.752 23:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:05.752 23:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.752 23:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:05.752 [2024-07-15 23:09:20.890639] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:05.752 23:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.752 23:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:05.752 23:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.752 23:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:05.752 Malloc1 00:06:05.752 23:09:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.752 23:09:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:05.752 23:09:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.752 23:09:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:06.010 23:09:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.010 23:09:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:06.010 23:09:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.010 23:09:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:06:06.010 23:09:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.010 23:09:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:06.010 23:09:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.010 23:09:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:06.010 [2024-07-15 23:09:21.085652] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:06.010 23:09:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.010 23:09:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:06.010 23:09:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:06.010 23:09:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:06.010 23:09:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:06.010 23:09:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:06.010 23:09:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:06.010 23:09:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.010 23:09:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:06.010 23:09:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.010 23:09:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:06.010 { 00:06:06.010 "name": "Malloc1", 00:06:06.010 "aliases": [ 00:06:06.010 "06ddde99-a11a-49e4-8478-c7d4df99f12c" 00:06:06.010 ], 00:06:06.010 "product_name": "Malloc disk", 00:06:06.010 "block_size": 512, 00:06:06.010 "num_blocks": 1048576, 00:06:06.010 "uuid": "06ddde99-a11a-49e4-8478-c7d4df99f12c", 00:06:06.010 "assigned_rate_limits": { 00:06:06.010 "rw_ios_per_sec": 0, 00:06:06.010 "rw_mbytes_per_sec": 0, 00:06:06.010 "r_mbytes_per_sec": 0, 00:06:06.010 "w_mbytes_per_sec": 0 00:06:06.010 }, 00:06:06.010 "claimed": true, 00:06:06.010 "claim_type": "exclusive_write", 00:06:06.010 "zoned": false, 00:06:06.010 "supported_io_types": { 00:06:06.010 "read": true, 00:06:06.010 "write": true, 00:06:06.010 "unmap": true, 00:06:06.010 "flush": true, 00:06:06.010 "reset": true, 00:06:06.010 "nvme_admin": false, 00:06:06.010 "nvme_io": false, 00:06:06.010 "nvme_io_md": false, 00:06:06.010 "write_zeroes": true, 00:06:06.010 "zcopy": true, 00:06:06.010 "get_zone_info": false, 00:06:06.010 "zone_management": false, 00:06:06.010 "zone_append": false, 00:06:06.010 "compare": false, 00:06:06.010 "compare_and_write": false, 00:06:06.010 "abort": true, 00:06:06.010 "seek_hole": false, 00:06:06.010 "seek_data": false, 00:06:06.010 "copy": true, 00:06:06.010 "nvme_iov_md": false 00:06:06.010 }, 00:06:06.010 "memory_domains": [ 00:06:06.010 { 
00:06:06.010 "dma_device_id": "system", 00:06:06.010 "dma_device_type": 1 00:06:06.010 }, 00:06:06.010 { 00:06:06.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:06.010 "dma_device_type": 2 00:06:06.010 } 00:06:06.010 ], 00:06:06.010 "driver_specific": {} 00:06:06.010 } 00:06:06.010 ]' 00:06:06.010 23:09:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:06.010 23:09:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:06.010 23:09:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:06.010 23:09:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:06.010 23:09:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:06.010 23:09:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:06.010 23:09:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:06.010 23:09:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:06.574 23:09:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:06.574 23:09:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:06.574 23:09:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:06.574 23:09:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:06.574 23:09:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:09.096 23:09:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:09.096 23:09:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:09.096 23:09:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:09.096 23:09:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:09.096 23:09:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:09.096 23:09:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:09.096 23:09:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:09.096 23:09:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:09.096 23:09:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:09.096 23:09:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:06:09.096 23:09:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:09.096 23:09:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:09.096 23:09:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:09.096 23:09:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:09.096 23:09:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:09.096 23:09:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:09.096 23:09:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:09.096 23:09:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:09.353 23:09:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:10.285 23:09:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:10.285 23:09:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:10.285 23:09:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:10.285 23:09:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.285 23:09:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:10.285 ************************************ 00:06:10.285 START TEST filesystem_ext4 00:06:10.285 ************************************ 00:06:10.285 23:09:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:10.285 23:09:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:10.285 23:09:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:10.285 23:09:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:10.285 23:09:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:10.285 23:09:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:10.285 23:09:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:10.285 23:09:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:10.285 23:09:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:10.285 23:09:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:10.285 23:09:25 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:10.285 mke2fs 1.46.5 (30-Dec-2021) 00:06:10.285 Discarding device blocks: 0/522240 done 00:06:10.285 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:10.285 Filesystem UUID: c29d216f-5c2b-4b6d-ada9-f11bb42ba807 00:06:10.285 Superblock backups stored on blocks: 00:06:10.285 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:10.285 00:06:10.285 Allocating group tables: 0/64 done 00:06:10.285 Writing inode tables: 0/64 done 00:06:10.542 Creating journal (8192 blocks): done 00:06:11.055 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:06:11.055 00:06:11.055 23:09:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:11.055 23:09:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:11.312 23:09:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:11.312 23:09:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:06:11.312 23:09:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:11.312 23:09:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:06:11.312 23:09:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:11.312 23:09:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:11.312 23:09:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2233987 00:06:11.312 23:09:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:11.312 23:09:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:11.312 23:09:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:11.312 23:09:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:11.312 00:06:11.313 real 0m1.085s 00:06:11.313 user 0m0.019s 00:06:11.313 sys 0m0.057s 00:06:11.313 23:09:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.313 23:09:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:11.313 ************************************ 00:06:11.313 END TEST filesystem_ext4 00:06:11.313 ************************************ 00:06:11.313 23:09:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:11.313 23:09:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:11.313 23:09:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:11.313 23:09:26 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.313 23:09:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:11.313 ************************************ 00:06:11.313 START TEST filesystem_btrfs 00:06:11.313 ************************************ 00:06:11.313 23:09:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:11.313 23:09:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:11.313 23:09:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:11.313 23:09:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:11.313 23:09:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:11.313 23:09:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:11.313 23:09:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:11.313 23:09:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:11.313 23:09:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:11.313 23:09:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:11.313 23:09:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:11.875 btrfs-progs v6.6.2 00:06:11.875 See https://btrfs.readthedocs.io for more information. 00:06:11.875 00:06:11.875 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:11.875 NOTE: several default settings have changed in version 5.15, please make sure 00:06:11.875 this does not affect your deployments: 00:06:11.875 - DUP for metadata (-m dup) 00:06:11.875 - enabled no-holes (-O no-holes) 00:06:11.875 - enabled free-space-tree (-R free-space-tree) 00:06:11.875 00:06:11.875 Label: (null) 00:06:11.875 UUID: 59079f76-143a-48e2-bc9b-fca36963eafa 00:06:11.875 Node size: 16384 00:06:11.875 Sector size: 4096 00:06:11.875 Filesystem size: 510.00MiB 00:06:11.875 Block group profiles: 00:06:11.875 Data: single 8.00MiB 00:06:11.875 Metadata: DUP 32.00MiB 00:06:11.875 System: DUP 8.00MiB 00:06:11.875 SSD detected: yes 00:06:11.875 Zoned device: no 00:06:11.875 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:11.875 Runtime features: free-space-tree 00:06:11.875 Checksum: crc32c 00:06:11.875 Number of devices: 1 00:06:11.875 Devices: 00:06:11.875 ID SIZE PATH 00:06:11.875 1 510.00MiB /dev/nvme0n1p1 00:06:11.875 00:06:11.875 23:09:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:11.875 23:09:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:12.807 23:09:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:12.807 23:09:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:06:12.807 23:09:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:12.807 23:09:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:06:12.807 23:09:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:12.807 23:09:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:12.807 23:09:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2233987 00:06:12.807 23:09:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:12.807 23:09:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:12.807 23:09:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:12.807 23:09:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:12.807 00:06:12.807 real 0m1.348s 00:06:12.807 user 0m0.021s 00:06:12.807 sys 0m0.120s 00:06:12.807 23:09:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.807 23:09:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:12.807 ************************************ 00:06:12.807 END TEST filesystem_btrfs 00:06:12.807 ************************************ 00:06:12.807 23:09:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:12.807 23:09:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:12.807 23:09:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:12.807 23:09:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.807 23:09:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:12.807 ************************************ 00:06:12.807 START TEST filesystem_xfs 00:06:12.807 ************************************ 00:06:12.807 23:09:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:12.807 23:09:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:12.807 23:09:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:12.807 23:09:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:12.807 23:09:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:12.807 23:09:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:12.807 23:09:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:12.807 23:09:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:06:12.807 23:09:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:12.807 23:09:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:12.807 23:09:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:12.807 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:12.807 = sectsz=512 attr=2, projid32bit=1 00:06:12.807 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:12.807 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:12.807 data = bsize=4096 blocks=130560, imaxpct=25 00:06:12.807 = sunit=0 swidth=0 blks 00:06:12.807 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:12.807 log =internal log bsize=4096 blocks=16384, version=2 00:06:12.807 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:12.807 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:14.178 Discarding blocks...Done. 
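Each filesystem_ext4/btrfs/xfs subtest applies the same check once its mkfs has finished: mount the partition created on the exported namespace, do a small write-sync-delete cycle, unmount, and confirm that the target process is still alive and that the initiator still sees both the NVMe disk and its partition. A condensed sketch of that loop, assuming the nvme0n1 device name, the /mnt/device mount point, and the target PID captured earlier in this run:

mount /dev/nvme0n1p1 /mnt/device            # partition created by parted on the exported Malloc1 namespace
touch /mnt/device/aaa                       # small I/O through the filesystem
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 "$nvmfpid"                          # sends no signal; only checks the nvmf_tgt PID still exists
lsblk -l -o NAME | grep -q -w nvme0n1       # disk still visible to the initiator
lsblk -l -o NAME | grep -q -w nvme0n1p1     # partition still visible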
00:06:14.178 23:09:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:14.178 23:09:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:16.697 23:09:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:16.697 23:09:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:06:16.697 23:09:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:16.697 23:09:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:06:16.697 23:09:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:06:16.697 23:09:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:16.697 23:09:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2233987 00:06:16.697 23:09:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:16.697 23:09:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:16.697 23:09:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:16.697 23:09:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:16.697 00:06:16.697 real 0m3.690s 00:06:16.697 user 0m0.020s 00:06:16.697 sys 0m0.055s 00:06:16.697 23:09:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.697 23:09:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:16.697 ************************************ 00:06:16.697 END TEST filesystem_xfs 00:06:16.697 ************************************ 00:06:16.697 23:09:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:16.697 23:09:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:16.697 23:09:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:16.697 23:09:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:16.954 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:16.954 23:09:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:16.954 23:09:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:06:16.954 23:09:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:16.954 23:09:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:16.954 23:09:32 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:16.954 23:09:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:16.954 23:09:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:06:16.954 23:09:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:16.954 23:09:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.954 23:09:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:16.954 23:09:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.954 23:09:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:16.954 23:09:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2233987 00:06:16.954 23:09:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2233987 ']' 00:06:16.954 23:09:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 2233987 00:06:16.954 23:09:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:06:16.954 23:09:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:16.954 23:09:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2233987 00:06:16.954 23:09:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:16.954 23:09:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:16.954 23:09:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2233987' 00:06:16.954 killing process with pid 2233987 00:06:16.954 23:09:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 2233987 00:06:16.954 23:09:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 2233987 00:06:17.517 23:09:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:17.517 00:06:17.517 real 0m12.066s 00:06:17.517 user 0m46.186s 00:06:17.517 sys 0m1.751s 00:06:17.517 23:09:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.517 23:09:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:17.517 ************************************ 00:06:17.517 END TEST nvmf_filesystem_no_in_capsule 00:06:17.517 ************************************ 00:06:17.517 23:09:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:06:17.517 23:09:32 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:17.517 23:09:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:06:17.517 23:09:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.517 23:09:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:17.517 ************************************ 00:06:17.517 START TEST nvmf_filesystem_in_capsule 00:06:17.517 ************************************ 00:06:17.517 23:09:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:06:17.517 23:09:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:17.517 23:09:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:17.517 23:09:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:17.517 23:09:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:17.517 23:09:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:17.517 23:09:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2235672 00:06:17.517 23:09:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:17.517 23:09:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2235672 00:06:17.517 23:09:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 2235672 ']' 00:06:17.517 23:09:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.517 23:09:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:17.517 23:09:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.517 23:09:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:17.517 23:09:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:17.517 [2024-07-15 23:09:32.657364] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:06:17.517 [2024-07-15 23:09:32.657451] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:17.517 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.517 [2024-07-15 23:09:32.725912] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:17.773 [2024-07-15 23:09:32.842807] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:17.773 [2024-07-15 23:09:32.842863] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
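This second pass repeats the whole filesystem suite with in-capsule data enabled: the functional difference is the transport's in-capsule data size, so host writes of up to 4096 bytes can be carried inside the NVMe/TCP command capsule rather than in a separate data transfer. In RPC terms the two passes differ only in the -c argument to transport creation; a sketch using scripts/rpc.py against the target started above, with the other options as they appear in this run:

# first pass (nvmf_filesystem_no_in_capsule): no in-capsule data
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
# second pass (nvmf_filesystem_in_capsule): allow up to 4 KiB of write data inside the command capsule
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096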
00:06:17.773 [2024-07-15 23:09:32.842880] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:17.773 [2024-07-15 23:09:32.842894] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:17.773 [2024-07-15 23:09:32.842906] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:17.773 [2024-07-15 23:09:32.842972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.773 [2024-07-15 23:09:32.843053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.773 [2024-07-15 23:09:32.843148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:17.773 [2024-07-15 23:09:32.843151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.334 23:09:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:18.334 23:09:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:18.334 23:09:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:18.334 23:09:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:18.334 23:09:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:18.334 23:09:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:18.334 23:09:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:18.334 23:09:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:18.334 23:09:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.334 23:09:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:18.334 [2024-07-15 23:09:33.633695] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:18.334 23:09:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.334 23:09:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:18.334 23:09:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.334 23:09:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:18.591 Malloc1 00:06:18.591 23:09:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.591 23:09:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:18.591 23:09:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.591 23:09:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:18.591 23:09:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.591 23:09:33 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:18.591 23:09:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.591 23:09:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:18.591 23:09:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.591 23:09:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:18.591 23:09:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.591 23:09:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:18.591 [2024-07-15 23:09:33.820242] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:18.591 23:09:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.591 23:09:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:18.591 23:09:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:18.591 23:09:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:18.591 23:09:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:18.591 23:09:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:18.591 23:09:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:18.591 23:09:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.591 23:09:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:18.591 23:09:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.591 23:09:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:18.591 { 00:06:18.591 "name": "Malloc1", 00:06:18.591 "aliases": [ 00:06:18.591 "cd6fcd46-5e9c-4929-8fb4-b007b3004c8a" 00:06:18.591 ], 00:06:18.591 "product_name": "Malloc disk", 00:06:18.591 "block_size": 512, 00:06:18.591 "num_blocks": 1048576, 00:06:18.591 "uuid": "cd6fcd46-5e9c-4929-8fb4-b007b3004c8a", 00:06:18.591 "assigned_rate_limits": { 00:06:18.591 "rw_ios_per_sec": 0, 00:06:18.591 "rw_mbytes_per_sec": 0, 00:06:18.591 "r_mbytes_per_sec": 0, 00:06:18.591 "w_mbytes_per_sec": 0 00:06:18.591 }, 00:06:18.591 "claimed": true, 00:06:18.591 "claim_type": "exclusive_write", 00:06:18.591 "zoned": false, 00:06:18.591 "supported_io_types": { 00:06:18.591 "read": true, 00:06:18.591 "write": true, 00:06:18.591 "unmap": true, 00:06:18.591 "flush": true, 00:06:18.591 "reset": true, 00:06:18.591 "nvme_admin": false, 00:06:18.591 "nvme_io": false, 00:06:18.591 "nvme_io_md": false, 00:06:18.591 "write_zeroes": true, 00:06:18.591 "zcopy": true, 00:06:18.591 "get_zone_info": false, 00:06:18.591 "zone_management": false, 00:06:18.591 
"zone_append": false, 00:06:18.591 "compare": false, 00:06:18.591 "compare_and_write": false, 00:06:18.591 "abort": true, 00:06:18.591 "seek_hole": false, 00:06:18.591 "seek_data": false, 00:06:18.591 "copy": true, 00:06:18.591 "nvme_iov_md": false 00:06:18.591 }, 00:06:18.591 "memory_domains": [ 00:06:18.591 { 00:06:18.591 "dma_device_id": "system", 00:06:18.591 "dma_device_type": 1 00:06:18.591 }, 00:06:18.591 { 00:06:18.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:18.591 "dma_device_type": 2 00:06:18.591 } 00:06:18.591 ], 00:06:18.591 "driver_specific": {} 00:06:18.591 } 00:06:18.591 ]' 00:06:18.591 23:09:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:18.591 23:09:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:18.591 23:09:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:18.848 23:09:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:18.848 23:09:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:18.848 23:09:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:18.848 23:09:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:18.848 23:09:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:19.414 23:09:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:19.414 23:09:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:19.414 23:09:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:19.414 23:09:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:19.414 23:09:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:21.309 23:09:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:21.309 23:09:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:21.309 23:09:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:21.309 23:09:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:21.309 23:09:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:21.309 23:09:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:21.309 23:09:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:21.567 23:09:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:06:21.567 23:09:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:21.567 23:09:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:21.567 23:09:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:21.567 23:09:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:21.567 23:09:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:21.567 23:09:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:21.567 23:09:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:21.567 23:09:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:21.567 23:09:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:21.824 23:09:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:22.388 23:09:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:23.318 23:09:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:23.318 23:09:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:23.318 23:09:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:23.318 23:09:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.318 23:09:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:23.318 ************************************ 00:06:23.319 START TEST filesystem_in_capsule_ext4 00:06:23.319 ************************************ 00:06:23.319 23:09:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:23.319 23:09:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:23.319 23:09:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:23.319 23:09:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:23.319 23:09:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:23.319 23:09:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:23.319 23:09:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:23.319 23:09:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:23.319 23:09:38 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:23.319 23:09:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:23.319 23:09:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:23.319 mke2fs 1.46.5 (30-Dec-2021) 00:06:23.576 Discarding device blocks: 0/522240 done 00:06:23.576 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:23.576 Filesystem UUID: a7031141-25ae-4c8f-bb96-afe15babc8aa 00:06:23.576 Superblock backups stored on blocks: 00:06:23.576 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:23.576 00:06:23.576 Allocating group tables: 0/64 done 00:06:23.576 Writing inode tables: 0/64 done 00:06:26.179 Creating journal (8192 blocks): done 00:06:26.179 Writing superblocks and filesystem accounting information: 0/64 done 00:06:26.179 00:06:26.179 23:09:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:26.179 23:09:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:27.109 23:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:27.109 23:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:06:27.109 23:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:27.109 23:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:06:27.109 23:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:27.109 23:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:27.109 23:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2235672 00:06:27.109 23:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:27.109 23:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:27.109 23:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:27.109 23:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:27.109 00:06:27.109 real 0m3.686s 00:06:27.109 user 0m0.015s 00:06:27.109 sys 0m0.061s 00:06:27.109 23:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.109 23:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:27.109 ************************************ 00:06:27.109 END TEST filesystem_in_capsule_ext4 00:06:27.109 ************************************ 00:06:27.109 
23:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:27.109 23:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:27.110 23:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:27.110 23:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.110 23:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:27.110 ************************************ 00:06:27.110 START TEST filesystem_in_capsule_btrfs 00:06:27.110 ************************************ 00:06:27.110 23:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:27.110 23:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:27.110 23:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:27.110 23:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:27.110 23:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:27.110 23:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:27.110 23:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:27.110 23:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:27.110 23:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:27.110 23:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:27.110 23:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:27.366 btrfs-progs v6.6.2 00:06:27.366 See https://btrfs.readthedocs.io for more information. 00:06:27.366 00:06:27.366 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:27.366 NOTE: several default settings have changed in version 5.15, please make sure 00:06:27.366 this does not affect your deployments: 00:06:27.366 - DUP for metadata (-m dup) 00:06:27.366 - enabled no-holes (-O no-holes) 00:06:27.366 - enabled free-space-tree (-R free-space-tree) 00:06:27.366 00:06:27.366 Label: (null) 00:06:27.366 UUID: 2fae71e6-64c0-43de-acf3-41221c8f28c0 00:06:27.366 Node size: 16384 00:06:27.366 Sector size: 4096 00:06:27.366 Filesystem size: 510.00MiB 00:06:27.366 Block group profiles: 00:06:27.366 Data: single 8.00MiB 00:06:27.366 Metadata: DUP 32.00MiB 00:06:27.366 System: DUP 8.00MiB 00:06:27.366 SSD detected: yes 00:06:27.366 Zoned device: no 00:06:27.366 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:27.366 Runtime features: free-space-tree 00:06:27.366 Checksum: crc32c 00:06:27.366 Number of devices: 1 00:06:27.366 Devices: 00:06:27.366 ID SIZE PATH 00:06:27.366 1 510.00MiB /dev/nvme0n1p1 00:06:27.366 00:06:27.366 23:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:27.366 23:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:27.931 23:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:27.931 23:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:06:27.931 23:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:27.931 23:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:06:27.931 23:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:27.931 23:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:27.931 23:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2235672 00:06:27.931 23:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:27.931 23:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:27.931 23:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:27.931 23:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:27.931 00:06:27.931 real 0m0.676s 00:06:27.931 user 0m0.017s 00:06:27.931 sys 0m0.113s 00:06:27.931 23:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.931 23:09:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:27.931 ************************************ 00:06:27.931 END TEST filesystem_in_capsule_btrfs 00:06:27.931 ************************************ 00:06:27.931 23:09:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:06:27.931 23:09:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:06:27.931 23:09:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:27.931 23:09:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.931 23:09:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:27.931 ************************************ 00:06:27.931 START TEST filesystem_in_capsule_xfs 00:06:27.931 ************************************ 00:06:27.931 23:09:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:27.931 23:09:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:27.931 23:09:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:27.931 23:09:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:27.931 23:09:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:27.931 23:09:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:27.931 23:09:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:27.931 23:09:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:06:27.931 23:09:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:27.931 23:09:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:27.931 23:09:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:27.931 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:27.931 = sectsz=512 attr=2, projid32bit=1 00:06:27.931 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:27.931 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:27.931 data = bsize=4096 blocks=130560, imaxpct=25 00:06:27.931 = sunit=0 swidth=0 blks 00:06:27.931 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:27.931 log =internal log bsize=4096 blocks=16384, version=2 00:06:27.931 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:27.931 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:28.860 Discarding blocks...Done. 
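The xtrace above shows make_filesystem choosing the force flag per filesystem (-F for ext4, -f for btrfs and xfs) before calling mkfs on the partition created earlier. A simplified sketch of that logic, with the retry and bookkeeping details of the real helper in common/autotest_common.sh deliberately omitted:

  make_filesystem() {
      local fstype=$1 dev_name=$2 force
      # ext4's mkfs forces with -F; btrfs and xfs use -f.
      if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
      mkfs.$fstype $force "$dev_name"
  }

  # The partition it targets was prepared earlier in this log:
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe
  make_filesystem xfs /dev/nvme0n1p1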
00:06:28.860 23:09:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:28.860 23:09:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:31.381 23:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:31.381 23:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:06:31.381 23:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:31.381 23:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:06:31.381 23:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:06:31.381 23:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:31.381 23:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2235672 00:06:31.381 23:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:31.381 23:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:31.381 23:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:31.381 23:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:31.381 00:06:31.381 real 0m3.522s 00:06:31.381 user 0m0.017s 00:06:31.381 sys 0m0.052s 00:06:31.381 23:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.381 23:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:31.381 ************************************ 00:06:31.381 END TEST filesystem_in_capsule_xfs 00:06:31.381 ************************************ 00:06:31.381 23:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:31.381 23:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:31.381 23:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:31.381 23:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:31.637 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:31.637 23:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:31.637 23:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:06:31.637 23:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:31.637 23:09:46 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:31.637 23:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:31.637 23:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:31.637 23:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:06:31.637 23:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:31.637 23:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.637 23:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:31.637 23:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.637 23:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:31.637 23:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2235672 00:06:31.637 23:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2235672 ']' 00:06:31.637 23:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 2235672 00:06:31.637 23:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:06:31.637 23:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:31.637 23:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2235672 00:06:31.637 23:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:31.637 23:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:31.637 23:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2235672' 00:06:31.637 killing process with pid 2235672 00:06:31.637 23:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 2235672 00:06:31.637 23:09:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 2235672 00:06:32.200 23:09:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:32.200 00:06:32.200 real 0m14.695s 00:06:32.200 user 0m56.640s 00:06:32.200 sys 0m1.957s 00:06:32.200 23:09:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.200 23:09:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:32.200 ************************************ 00:06:32.200 END TEST nvmf_filesystem_in_capsule 00:06:32.200 ************************************ 00:06:32.200 23:09:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:06:32.200 23:09:47 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:06:32.200 23:09:47 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:06:32.200 23:09:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:06:32.200 23:09:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:32.200 23:09:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:06:32.200 23:09:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:32.200 23:09:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:32.200 rmmod nvme_tcp 00:06:32.200 rmmod nvme_fabrics 00:06:32.200 rmmod nvme_keyring 00:06:32.200 23:09:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:32.200 23:09:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:06:32.200 23:09:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:06:32.200 23:09:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:06:32.200 23:09:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:32.200 23:09:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:32.200 23:09:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:32.200 23:09:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:32.200 23:09:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:32.200 23:09:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:32.200 23:09:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:32.200 23:09:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:34.729 23:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:34.729 00:06:34.729 real 0m31.391s 00:06:34.729 user 1m43.760s 00:06:34.729 sys 0m5.412s 00:06:34.729 23:09:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.729 23:09:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:34.729 ************************************ 00:06:34.729 END TEST nvmf_filesystem 00:06:34.729 ************************************ 00:06:34.729 23:09:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:34.729 23:09:49 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:34.729 23:09:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:34.729 23:09:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.729 23:09:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:34.729 ************************************ 00:06:34.729 START TEST nvmf_target_discovery 00:06:34.729 ************************************ 00:06:34.729 23:09:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:34.729 * Looking for test storage... 
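Before discovery.sh gets going, the nvmftestfini block above tears the initiator side back down. Roughly, for this TCP run (the netns removal is an assumption about what _remove_spdk_ns does; the rest mirrors the rmmod/flush lines in the log):

  sync
  modprobe -v -r nvme-tcp        # produces the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above
  modprobe -v -r nvme-fabrics
  ip netns delete cvl_0_0_ns_spdk    # assumption: how the target namespace is dropped
  ip -4 addr flush cvl_0_1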
00:06:34.729 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:34.729 23:09:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:34.729 23:09:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:06:34.729 23:09:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:34.729 23:09:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:34.729 23:09:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:34.729 23:09:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:34.729 23:09:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:34.729 23:09:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:34.730 23:09:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:34.730 23:09:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:34.730 23:09:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:34.730 23:09:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:34.730 23:09:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:06:34.730 23:09:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:06:34.730 23:09:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:34.730 23:09:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:34.730 23:09:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:34.730 23:09:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:34.730 23:09:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:34.730 23:09:49 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:34.730 23:09:49 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:34.730 23:09:49 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:34.730 23:09:49 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.730 23:09:49 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.730 23:09:49 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.730 23:09:49 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:06:34.730 23:09:49 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.730 23:09:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:06:34.730 23:09:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:34.730 23:09:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:34.730 23:09:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:34.730 23:09:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:34.730 23:09:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:34.730 23:09:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:34.730 23:09:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:34.730 23:09:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:34.730 23:09:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:06:34.730 23:09:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:06:34.730 23:09:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:06:34.730 23:09:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:06:34.730 23:09:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:06:34.730 23:09:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:34.730 23:09:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:34.730 23:09:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:06:34.730 23:09:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:34.730 23:09:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:34.730 23:09:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:34.730 23:09:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:34.730 23:09:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:34.730 23:09:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:34.730 23:09:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:34.730 23:09:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:06:34.730 23:09:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:36.631 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:36.631 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:06:36.631 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:36.631 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:36.631 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:36.631 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:36.631 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:36.631 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:36.632 23:09:51 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:06:36.632 Found 0000:84:00.0 (0x8086 - 0x159b) 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:06:36.632 Found 0000:84:00.1 (0x8086 - 0x159b) 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:06:36.632 Found net devices under 0000:84:00.0: cvl_0_0 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:06:36.632 Found net devices under 0000:84:00.1: cvl_0_1 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:36.632 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:36.632 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:06:36.632 00:06:36.632 --- 10.0.0.2 ping statistics --- 00:06:36.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:36.632 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:36.632 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:36.632 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:06:36.632 00:06:36.632 --- 10.0.0.1 ping statistics --- 00:06:36.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:36.632 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2239566 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2239566 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 2239566 ']' 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:06:36.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:36.632 23:09:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:36.632 [2024-07-15 23:09:51.694454] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:06:36.632 [2024-07-15 23:09:51.694550] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:36.632 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.632 [2024-07-15 23:09:51.761159] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:36.632 [2024-07-15 23:09:51.874058] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:36.633 [2024-07-15 23:09:51.874118] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:36.633 [2024-07-15 23:09:51.874148] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:36.633 [2024-07-15 23:09:51.874160] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:36.633 [2024-07-15 23:09:51.874170] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:36.633 [2024-07-15 23:09:51.874220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.633 [2024-07-15 23:09:51.874279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:36.633 [2024-07-15 23:09:51.874307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:36.633 [2024-07-15 23:09:51.874310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.890 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:36.890 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:06:36.890 23:09:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:36.890 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:36.890 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:36.890 23:09:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:36.890 23:09:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:36.890 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:36.891 [2024-07-15 23:09:52.032681] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
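From here the test repeats one RPC sequence per subsystem: create a null bdev, create the subsystem, attach the bdev as a namespace, and expose a TCP listener. rpc_cmd is effectively a wrapper around scripts/rpc.py talking to the nvmf_tgt started above; a condensed, stand-alone sketch of the same sequence (socket path and netns wrapping omitted):

  rpc=./scripts/rpc.py    # run from the SPDK checkout; the test uses its own rpc_cmd wrapper
  $rpc nvmf_create_transport -t tcp -o -u 8192
  for i in 1 2 3 4; do
      $rpc bdev_null_create Null$i 102400 512
      $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430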
00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:36.891 Null1 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:36.891 [2024-07-15 23:09:52.073036] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:36.891 Null2 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:06:36.891 23:09:52 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:36.891 Null3 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:36.891 Null4 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.891 23:09:52 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.891 23:09:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420 00:06:37.148 00:06:37.148 Discovery Log Number of Records 6, Generation counter 6 00:06:37.148 =====Discovery Log Entry 0====== 00:06:37.148 trtype: tcp 00:06:37.148 adrfam: ipv4 00:06:37.148 subtype: current discovery subsystem 00:06:37.148 treq: not required 00:06:37.148 portid: 0 00:06:37.148 trsvcid: 4420 00:06:37.148 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:37.148 traddr: 10.0.0.2 00:06:37.148 eflags: explicit discovery connections, duplicate discovery information 00:06:37.148 sectype: none 00:06:37.148 =====Discovery Log Entry 1====== 00:06:37.148 trtype: tcp 00:06:37.148 adrfam: ipv4 00:06:37.148 subtype: nvme subsystem 00:06:37.148 treq: not required 00:06:37.148 portid: 0 00:06:37.148 trsvcid: 4420 00:06:37.148 subnqn: nqn.2016-06.io.spdk:cnode1 00:06:37.148 traddr: 10.0.0.2 00:06:37.148 eflags: none 00:06:37.148 sectype: none 00:06:37.148 =====Discovery Log Entry 2====== 00:06:37.148 trtype: tcp 00:06:37.148 adrfam: ipv4 00:06:37.148 subtype: nvme subsystem 00:06:37.148 treq: not required 00:06:37.148 portid: 0 00:06:37.148 trsvcid: 4420 00:06:37.148 subnqn: nqn.2016-06.io.spdk:cnode2 00:06:37.148 traddr: 10.0.0.2 00:06:37.148 eflags: none 00:06:37.148 sectype: none 00:06:37.148 =====Discovery Log Entry 3====== 00:06:37.148 trtype: tcp 00:06:37.148 adrfam: ipv4 00:06:37.148 subtype: nvme subsystem 00:06:37.148 treq: not required 00:06:37.148 portid: 0 00:06:37.148 trsvcid: 4420 00:06:37.148 subnqn: nqn.2016-06.io.spdk:cnode3 00:06:37.148 traddr: 10.0.0.2 00:06:37.148 eflags: none 00:06:37.148 sectype: none 00:06:37.148 =====Discovery Log Entry 4====== 00:06:37.148 trtype: tcp 00:06:37.148 adrfam: ipv4 00:06:37.148 subtype: nvme subsystem 00:06:37.148 treq: not required 
00:06:37.148 portid: 0 00:06:37.148 trsvcid: 4420 00:06:37.148 subnqn: nqn.2016-06.io.spdk:cnode4 00:06:37.148 traddr: 10.0.0.2 00:06:37.148 eflags: none 00:06:37.148 sectype: none 00:06:37.148 =====Discovery Log Entry 5====== 00:06:37.148 trtype: tcp 00:06:37.148 adrfam: ipv4 00:06:37.148 subtype: discovery subsystem referral 00:06:37.148 treq: not required 00:06:37.148 portid: 0 00:06:37.148 trsvcid: 4430 00:06:37.148 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:37.148 traddr: 10.0.0.2 00:06:37.148 eflags: none 00:06:37.148 sectype: none 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:06:37.148 Perform nvmf subsystem discovery via RPC 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:37.148 [ 00:06:37.148 { 00:06:37.148 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:06:37.148 "subtype": "Discovery", 00:06:37.148 "listen_addresses": [ 00:06:37.148 { 00:06:37.148 "trtype": "TCP", 00:06:37.148 "adrfam": "IPv4", 00:06:37.148 "traddr": "10.0.0.2", 00:06:37.148 "trsvcid": "4420" 00:06:37.148 } 00:06:37.148 ], 00:06:37.148 "allow_any_host": true, 00:06:37.148 "hosts": [] 00:06:37.148 }, 00:06:37.148 { 00:06:37.148 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:06:37.148 "subtype": "NVMe", 00:06:37.148 "listen_addresses": [ 00:06:37.148 { 00:06:37.148 "trtype": "TCP", 00:06:37.148 "adrfam": "IPv4", 00:06:37.148 "traddr": "10.0.0.2", 00:06:37.148 "trsvcid": "4420" 00:06:37.148 } 00:06:37.148 ], 00:06:37.148 "allow_any_host": true, 00:06:37.148 "hosts": [], 00:06:37.148 "serial_number": "SPDK00000000000001", 00:06:37.148 "model_number": "SPDK bdev Controller", 00:06:37.148 "max_namespaces": 32, 00:06:37.148 "min_cntlid": 1, 00:06:37.148 "max_cntlid": 65519, 00:06:37.148 "namespaces": [ 00:06:37.148 { 00:06:37.148 "nsid": 1, 00:06:37.148 "bdev_name": "Null1", 00:06:37.148 "name": "Null1", 00:06:37.148 "nguid": "8E11E22899BD4C378399122EE9154D8B", 00:06:37.148 "uuid": "8e11e228-99bd-4c37-8399-122ee9154d8b" 00:06:37.148 } 00:06:37.148 ] 00:06:37.148 }, 00:06:37.148 { 00:06:37.148 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:06:37.148 "subtype": "NVMe", 00:06:37.148 "listen_addresses": [ 00:06:37.148 { 00:06:37.148 "trtype": "TCP", 00:06:37.148 "adrfam": "IPv4", 00:06:37.148 "traddr": "10.0.0.2", 00:06:37.148 "trsvcid": "4420" 00:06:37.148 } 00:06:37.148 ], 00:06:37.148 "allow_any_host": true, 00:06:37.148 "hosts": [], 00:06:37.148 "serial_number": "SPDK00000000000002", 00:06:37.148 "model_number": "SPDK bdev Controller", 00:06:37.148 "max_namespaces": 32, 00:06:37.148 "min_cntlid": 1, 00:06:37.148 "max_cntlid": 65519, 00:06:37.148 "namespaces": [ 00:06:37.148 { 00:06:37.148 "nsid": 1, 00:06:37.148 "bdev_name": "Null2", 00:06:37.148 "name": "Null2", 00:06:37.148 "nguid": "A3ED496201DE4E8CA42D4F4E4073DEFC", 00:06:37.148 "uuid": "a3ed4962-01de-4e8c-a42d-4f4e4073defc" 00:06:37.148 } 00:06:37.148 ] 00:06:37.148 }, 00:06:37.148 { 00:06:37.148 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:06:37.148 "subtype": "NVMe", 00:06:37.148 "listen_addresses": [ 00:06:37.148 { 00:06:37.148 "trtype": "TCP", 00:06:37.148 "adrfam": "IPv4", 00:06:37.148 "traddr": "10.0.0.2", 00:06:37.148 "trsvcid": "4420" 00:06:37.148 } 00:06:37.148 ], 00:06:37.148 "allow_any_host": true, 
00:06:37.148 "hosts": [], 00:06:37.148 "serial_number": "SPDK00000000000003", 00:06:37.148 "model_number": "SPDK bdev Controller", 00:06:37.148 "max_namespaces": 32, 00:06:37.148 "min_cntlid": 1, 00:06:37.148 "max_cntlid": 65519, 00:06:37.148 "namespaces": [ 00:06:37.148 { 00:06:37.148 "nsid": 1, 00:06:37.148 "bdev_name": "Null3", 00:06:37.148 "name": "Null3", 00:06:37.148 "nguid": "E3E69ADA563E4A7E8EC9130345A9D735", 00:06:37.148 "uuid": "e3e69ada-563e-4a7e-8ec9-130345a9d735" 00:06:37.148 } 00:06:37.148 ] 00:06:37.148 }, 00:06:37.148 { 00:06:37.148 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:06:37.148 "subtype": "NVMe", 00:06:37.148 "listen_addresses": [ 00:06:37.148 { 00:06:37.148 "trtype": "TCP", 00:06:37.148 "adrfam": "IPv4", 00:06:37.148 "traddr": "10.0.0.2", 00:06:37.148 "trsvcid": "4420" 00:06:37.148 } 00:06:37.148 ], 00:06:37.148 "allow_any_host": true, 00:06:37.148 "hosts": [], 00:06:37.148 "serial_number": "SPDK00000000000004", 00:06:37.148 "model_number": "SPDK bdev Controller", 00:06:37.148 "max_namespaces": 32, 00:06:37.148 "min_cntlid": 1, 00:06:37.148 "max_cntlid": 65519, 00:06:37.148 "namespaces": [ 00:06:37.148 { 00:06:37.148 "nsid": 1, 00:06:37.148 "bdev_name": "Null4", 00:06:37.148 "name": "Null4", 00:06:37.148 "nguid": "96CA83A87A5A41B7B64B5DBB54FF5BF2", 00:06:37.148 "uuid": "96ca83a8-7a5a-41b7-b64b-5dbb54ff5bf2" 00:06:37.148 } 00:06:37.148 ] 00:06:37.148 } 00:06:37.148 ] 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:37.148 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.406 23:09:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:06:37.406 23:09:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:06:37.406 23:09:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:06:37.406 23:09:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:06:37.406 23:09:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:37.406 23:09:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:06:37.406 23:09:52 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:37.406 23:09:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:06:37.406 23:09:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:37.406 23:09:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:37.406 rmmod nvme_tcp 00:06:37.406 rmmod nvme_fabrics 00:06:37.406 rmmod nvme_keyring 00:06:37.406 23:09:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:37.406 23:09:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:06:37.406 23:09:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:06:37.406 23:09:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2239566 ']' 00:06:37.406 23:09:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2239566 00:06:37.406 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 2239566 ']' 00:06:37.406 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 2239566 00:06:37.406 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:06:37.406 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:37.406 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2239566 00:06:37.406 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:37.406 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:37.406 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2239566' 00:06:37.406 killing process with pid 2239566 00:06:37.406 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 2239566 00:06:37.406 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 2239566 00:06:37.664 23:09:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:37.664 23:09:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:37.664 23:09:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:37.664 23:09:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:37.664 23:09:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:37.664 23:09:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:37.664 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:37.664 23:09:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:39.567 23:09:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:39.567 00:06:39.567 real 0m5.390s 00:06:39.567 user 0m4.393s 00:06:39.567 sys 0m1.814s 00:06:39.567 23:09:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.567 23:09:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:39.567 ************************************ 00:06:39.567 END TEST nvmf_target_discovery 00:06:39.567 ************************************ 00:06:39.825 23:09:54 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:06:39.825 23:09:54 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:39.825 23:09:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:39.825 23:09:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.825 23:09:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:39.825 ************************************ 00:06:39.825 START TEST nvmf_referrals 00:06:39.825 ************************************ 00:06:39.825 23:09:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:39.826 * Looking for test storage... 00:06:39.826 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
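The referral endpoints defined above (NVMF_REFERRAL_IP_1..3, all on port 4430) are what the rest of this test exercises through the SPDK JSON-RPC interface. As a condensed, hedged sketch of that flow -- assuming a running nvmf_tgt and that the rpc_cmd helper seen in the trace wraps SPDK's scripts/rpc.py client -- the referral management reduces to:

    # advertise three referrals from the discovery service, one per referral IP
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
    # read them back and check the advertised addresses
    scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'
    # remove a referral again
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430

The option names and the jq filter match the rpc_cmd invocations traced later in this test; the scripts/rpc.py entry point itself is an assumption made here for illustration.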
00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:06:39.826 23:09:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:42.354 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:42.354 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:06:42.354 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:42.354 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:42.354 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:42.354 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:42.354 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:42.354 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:42.355 23:09:57 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:06:42.355 Found 0000:84:00.0 (0x8086 - 0x159b) 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:06:42.355 Found 0000:84:00.1 (0x8086 - 0x159b) 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:42.355 23:09:57 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:06:42.355 Found net devices under 0000:84:00.0: cvl_0_0 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:06:42.355 Found net devices under 0000:84:00.1: cvl_0_1 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:42.355 23:09:57 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:42.355 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:42.355 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:06:42.355 00:06:42.355 --- 10.0.0.2 ping statistics --- 00:06:42.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.355 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:42.355 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:42.355 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:06:42.355 00:06:42.355 --- 10.0.0.1 ping statistics --- 00:06:42.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.355 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2241669 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2241669 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 2241669 ']' 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
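The commands traced above are the harness looping the two ports of the same physical adapter (0000:84:00.0 / 0000:84:00.1) back to each other through a network namespace, so the target (10.0.0.2 on cvl_0_0 inside cvl_0_0_ns_spdk) and the initiator (10.0.0.1 on cvl_0_1 in the host namespace) talk over real hardware. A condensed recap of that setup, using only the interface and namespace names printed in the trace (the abbreviated nvmf_tgt path stands for the build/bin binary shown above):

    ip netns add cvl_0_0_ns_spdk                                          # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                             # move one port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator address (host)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target address (namespace)
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                    # verify host -> namespace path
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF          # start the target in the namespace

Once nvmf_tgt is launched, the harness polls for its RPC socket at /var/tmp/spdk.sock, which is the "Waiting for process to start up..." message that follows.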
00:06:42.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:42.355 23:09:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:42.355 [2024-07-15 23:09:57.325510] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:06:42.355 [2024-07-15 23:09:57.325608] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:42.355 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.355 [2024-07-15 23:09:57.392970] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:42.355 [2024-07-15 23:09:57.509587] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:42.355 [2024-07-15 23:09:57.509640] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:42.355 [2024-07-15 23:09:57.509668] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:42.355 [2024-07-15 23:09:57.509680] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:42.356 [2024-07-15 23:09:57.509689] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:42.356 [2024-07-15 23:09:57.509839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.356 [2024-07-15 23:09:57.509909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.356 [2024-07-15 23:09:57.509942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:42.356 [2024-07-15 23:09:57.509944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.356 23:09:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.356 23:09:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:06:42.356 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:42.356 23:09:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:42.356 23:09:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:42.356 23:09:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:42.356 23:09:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:42.356 23:09:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.356 23:09:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:42.356 [2024-07-15 23:09:57.668672] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:42.613 23:09:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.613 23:09:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:06:42.613 23:09:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.613 23:09:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:42.613 [2024-07-15 23:09:57.680946] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:06:42.613 23:09:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.613 23:09:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:06:42.613 23:09:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.613 23:09:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:42.613 23:09:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.613 23:09:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:06:42.613 23:09:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.613 23:09:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:42.613 23:09:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.613 23:09:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:06:42.613 23:09:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.613 23:09:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:42.613 23:09:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.613 23:09:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:42.613 23:09:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:06:42.613 23:09:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.613 23:09:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:42.613 23:09:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.613 23:09:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:06:42.613 23:09:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:06:42.613 23:09:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:42.613 23:09:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:42.613 23:09:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:42.613 23:09:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.613 23:09:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:42.613 23:09:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:42.613 23:09:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.613 23:09:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:42.613 23:09:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:42.613 23:09:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:06:42.613 23:09:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:42.613 23:09:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:42.613 23:09:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 
--hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:42.613 23:09:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:42.613 23:09:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:42.869 23:09:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:42.869 23:09:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:42.869 23:09:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:06:42.869 23:09:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.869 23:09:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:42.869 23:09:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.869 23:09:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:06:42.869 23:09:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.869 23:09:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:42.869 23:09:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.869 23:09:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:06:42.869 23:09:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.869 23:09:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:42.869 23:09:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.869 23:09:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:42.869 23:09:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.869 23:09:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:06:42.869 23:09:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:42.869 23:09:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.869 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:06:42.869 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:06:42.869 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:42.869 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:42.869 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:42.869 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:42.869 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:42.869 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:42.869 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:06:42.869 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:06:42.869 23:09:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.869 23:09:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:42.869 23:09:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.869 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:42.870 23:09:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.870 23:09:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:42.870 23:09:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.870 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:06:42.870 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:42.870 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:42.870 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:42.870 23:09:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.870 23:09:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:42.870 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:43.127 23:09:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.127 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:06:43.127 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:43.127 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:06:43.127 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:43.127 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:43.127 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:43.127 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:43.127 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:43.127 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:06:43.127 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:43.127 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:06:43.127 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:06:43.127 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:43.127 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:43.127 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:43.127 23:09:58 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:06:43.127 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:06:43.127 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:43.127 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:06:43.127 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:43.127 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:43.384 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:43.384 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:43.384 23:09:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.384 23:09:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:43.384 23:09:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.384 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:06:43.384 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:43.384 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:43.384 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:43.384 23:09:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.384 23:09:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:43.384 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:43.384 23:09:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.384 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:06:43.384 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:43.384 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:06:43.384 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:43.384 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:43.384 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:43.384 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:43.384 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:43.642 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:06:43.642 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:43.642 23:09:58 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:06:43.642 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:06:43.642 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:43.642 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:43.642 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:43.642 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:06:43.642 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:06:43.642 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:43.642 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:06:43.642 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:43.642 23:09:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:43.899 23:09:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:43.899 23:09:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:06:43.899 23:09:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.899 23:09:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:43.899 23:09:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.899 23:09:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:43.899 23:09:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:06:43.899 23:09:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.899 23:09:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:43.899 23:09:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.899 23:09:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:06:43.899 23:09:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:06:43.899 23:09:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:43.899 23:09:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:43.899 23:09:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:43.899 23:09:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:43.899 23:09:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:43.899 
23:09:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:43.899 23:09:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:06:43.899 23:09:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:06:43.899 23:09:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:06:43.899 23:09:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:43.899 23:09:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:06:43.899 23:09:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:43.899 23:09:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:06:43.899 23:09:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:43.899 23:09:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:43.899 rmmod nvme_tcp 00:06:43.899 rmmod nvme_fabrics 00:06:43.899 rmmod nvme_keyring 00:06:43.899 23:09:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:43.899 23:09:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:06:43.899 23:09:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:06:43.899 23:09:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2241669 ']' 00:06:43.899 23:09:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2241669 00:06:43.899 23:09:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 2241669 ']' 00:06:43.899 23:09:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 2241669 00:06:43.899 23:09:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:06:43.899 23:09:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:43.899 23:09:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2241669 00:06:44.157 23:09:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:44.157 23:09:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:44.157 23:09:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2241669' 00:06:44.157 killing process with pid 2241669 00:06:44.157 23:09:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 2241669 00:06:44.157 23:09:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 2241669 00:06:44.414 23:09:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:44.414 23:09:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:44.414 23:09:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:44.414 23:09:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:44.414 23:09:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:44.414 23:09:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:44.414 23:09:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:44.414 23:09:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:46.315 23:10:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:46.315 00:06:46.315 real 0m6.617s 00:06:46.315 user 0m9.256s 00:06:46.315 sys 0m2.172s 00:06:46.315 23:10:01 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.315 23:10:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:46.315 ************************************ 00:06:46.315 END TEST nvmf_referrals 00:06:46.315 ************************************ 00:06:46.315 23:10:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:46.315 23:10:01 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:46.315 23:10:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:46.315 23:10:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.315 23:10:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:46.315 ************************************ 00:06:46.315 START TEST nvmf_connect_disconnect 00:06:46.315 ************************************ 00:06:46.315 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:46.315 * Looking for test storage... 00:06:46.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:46.316 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:46.316 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:06:46.574 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:46.574 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:46.574 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:46.574 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:46.574 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:46.574 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:46.574 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:46.574 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:46.574 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:46.574 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:46.574 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:06:46.574 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:06:46.574 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:46.574 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:46.574 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:46.574 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:46.574 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:46.574 23:10:01 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:46.574 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:46.574 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:46.574 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.574 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.574 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.574 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:06:46.574 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.574 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:06:46.574 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:46.574 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:46.574 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:46.574 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:46.574 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:46.574 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:46.574 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:46.574 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:46.574 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:46.574 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:46.575 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:06:46.575 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:46.575 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:46.575 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:46.575 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:46.575 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:46.575 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:46.575 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:46.575 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:46.575 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:46.575 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:46.575 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:06:46.575 23:10:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:06:48.474 Found 0000:84:00.0 (0x8086 - 0x159b) 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:06:48.474 Found 0000:84:00.1 (0x8086 - 0x159b) 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:48.474 23:10:03 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:06:48.474 Found net devices under 0000:84:00.0: cvl_0_0 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:48.474 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:06:48.475 Found net devices under 0000:84:00.1: cvl_0_1 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:48.475 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:48.475 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:06:48.475 00:06:48.475 --- 10.0.0.2 ping statistics --- 00:06:48.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:48.475 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:48.475 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:48.475 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:06:48.475 00:06:48.475 --- 10.0.0.1 ping statistics --- 00:06:48.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:48.475 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2243861 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2243861 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 2243861 ']' 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.475 23:10:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:48.475 [2024-07-15 23:10:03.762092] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
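Note: the nvmftestinit/nvmf_tcp_init trace above is dense xtrace output, but the network bring-up it performs is short. A minimal consolidated sketch of the same steps, using only the commands visible in the trace (the device names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing come from this particular run and are not fixed):

  # move the target-side port into its own namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port and verify reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> root namespace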
00:06:48.475 [2024-07-15 23:10:03.762190] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:48.731 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.731 [2024-07-15 23:10:03.827937] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:48.732 [2024-07-15 23:10:03.941268] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:48.732 [2024-07-15 23:10:03.941319] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:48.732 [2024-07-15 23:10:03.941348] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:48.732 [2024-07-15 23:10:03.941360] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:48.732 [2024-07-15 23:10:03.941370] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:48.732 [2024-07-15 23:10:03.941453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.732 [2024-07-15 23:10:03.941517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.732 [2024-07-15 23:10:03.941544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:48.732 [2024-07-15 23:10:03.941547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.988 23:10:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.988 23:10:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:06:48.988 23:10:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:48.988 23:10:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:48.988 23:10:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:48.988 23:10:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:48.988 23:10:04 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:48.988 23:10:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.988 23:10:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:48.988 [2024-07-15 23:10:04.103691] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:48.988 23:10:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.988 23:10:04 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:06:48.988 23:10:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.988 23:10:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:48.988 23:10:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.988 23:10:04 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:06:48.988 23:10:04 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:48.988 23:10:04 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.988 23:10:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:48.988 23:10:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.988 23:10:04 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:48.988 23:10:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.988 23:10:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:48.988 23:10:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.988 23:10:04 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:48.988 23:10:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.988 23:10:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:48.988 [2024-07-15 23:10:04.161114] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:48.988 23:10:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.988 23:10:04 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:06:48.988 23:10:04 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:06:48.988 23:10:04 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:06:52.260 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:54.901 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:57.423 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:59.944 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:03.244 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:03.244 23:10:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:07:03.244 23:10:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:07:03.244 23:10:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:03.244 23:10:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:07:03.244 23:10:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:03.244 23:10:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:07:03.244 23:10:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:03.244 23:10:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:03.244 rmmod nvme_tcp 00:07:03.244 rmmod nvme_fabrics 00:07:03.244 rmmod nvme_keyring 00:07:03.244 23:10:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:03.244 23:10:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:07:03.244 23:10:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:07:03.244 23:10:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2243861 ']' 00:07:03.244 23:10:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2243861 00:07:03.244 23:10:17 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@948 -- # '[' -z 2243861 ']' 00:07:03.244 23:10:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 2243861 00:07:03.244 23:10:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:07:03.244 23:10:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:03.244 23:10:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2243861 00:07:03.244 23:10:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:03.244 23:10:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:03.244 23:10:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2243861' 00:07:03.244 killing process with pid 2243861 00:07:03.244 23:10:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 2243861 00:07:03.244 23:10:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 2243861 00:07:03.244 23:10:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:03.244 23:10:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:03.244 23:10:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:03.244 23:10:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:03.244 23:10:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:03.244 23:10:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:03.244 23:10:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:03.244 23:10:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.142 23:10:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:05.142 00:07:05.142 real 0m18.762s 00:07:05.142 user 0m56.665s 00:07:05.142 sys 0m3.282s 00:07:05.142 23:10:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.142 23:10:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:05.142 ************************************ 00:07:05.142 END TEST nvmf_connect_disconnect 00:07:05.142 ************************************ 00:07:05.142 23:10:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:05.142 23:10:20 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:05.142 23:10:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:05.142 23:10:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.142 23:10:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:05.142 ************************************ 00:07:05.142 START TEST nvmf_multitarget 00:07:05.142 ************************************ 00:07:05.142 23:10:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:05.142 * Looking for test storage... 
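For reference, the nvmf_connect_disconnect run that just ended configures the target entirely through RPCs, which the xtrace above shows as rpc_cmd calls. A minimal sketch of that sequence, assuming rpc_cmd resolves to scripts/rpc.py talking to the default /var/tmp/spdk.sock of the nvmf_tgt started with -m 0xF inside the namespace:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0    # TCP transport with the options the test passes
  scripts/rpc.py bdev_malloc_create 64 512                       # 64 MiB malloc bdev, 512 B blocks; prints the bdev name (Malloc0)
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The five "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines above are the initiator-side loop (num_iterations=5) connecting to and disconnecting from that subsystem over 10.0.0.2:4420.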
00:07:05.142 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:05.142 23:10:20 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:05.142 23:10:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:07:05.142 23:10:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:05.142 23:10:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:05.142 23:10:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:05.142 23:10:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:05.142 23:10:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:05.142 23:10:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:05.142 23:10:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:05.142 23:10:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:05.142 23:10:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:05.142 23:10:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:05.142 23:10:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:05.142 23:10:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:05.142 23:10:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:05.142 23:10:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:05.142 23:10:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:05.142 23:10:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:05.142 23:10:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:05.142 23:10:20 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.142 23:10:20 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.142 23:10:20 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.142 23:10:20 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.142 23:10:20 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.142 23:10:20 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.142 23:10:20 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:07:05.142 23:10:20 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.142 23:10:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:07:05.142 23:10:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:05.142 23:10:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:05.142 23:10:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:05.142 23:10:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:05.142 23:10:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:05.143 23:10:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:05.143 23:10:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:05.143 23:10:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:05.401 23:10:20 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:05.401 23:10:20 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:07:05.401 23:10:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:05.401 23:10:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:05.401 23:10:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:05.401 23:10:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:05.401 23:10:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:05.401 23:10:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:07:05.401 23:10:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:05.401 23:10:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.401 23:10:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:05.401 23:10:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:05.401 23:10:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:07:05.401 23:10:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:07.299 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:07.299 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:07:07.299 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:07.299 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:07.299 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:07.299 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:07.299 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:07.299 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:07:07.299 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:07.299 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:07:07.299 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:07:07.299 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:07:07.299 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:07:07.299 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:07:07.299 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:07:07.299 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:07.299 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:07.299 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:07.299 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:07.299 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:07.299 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:07.299 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:07.299 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:07.299 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:07.299 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:07.299 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:07.299 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:07.299 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:07.299 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:07:07.299 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:07.299 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:07.299 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:07.299 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:07.299 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:07.299 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:07.299 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:07.299 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:07.299 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:07.299 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:07.299 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:07.299 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:07.299 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:07.299 Found 0000:84:00.1 (0x8086 - 0x159b) 00:07:07.299 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:07.299 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:07.299 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:07.299 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:07.299 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:07.299 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:07.300 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:07.300 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:07.300 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:07.300 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:07.300 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:07.300 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:07.300 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:07.300 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:07.300 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:07.300 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:07:07.300 Found net devices under 0000:84:00.0: cvl_0_0 00:07:07.300 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:07.300 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:07.300 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:07.300 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:07.300 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:07:07.300 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:07.300 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:07.300 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:07.300 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:07:07.300 Found net devices under 0000:84:00.1: cvl_0_1 00:07:07.300 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:07.300 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:07.300 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:07:07.300 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:07.300 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:07.300 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:07.300 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:07.300 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:07.300 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:07.300 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:07.300 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:07.300 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:07.300 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:07.300 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:07.300 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:07.300 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:07.300 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:07.300 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:07.300 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:07.557 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:07.558 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:07.558 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:07.558 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:07.558 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:07.558 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:07.558 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:07.558 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:07.558 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:07:07.558 00:07:07.558 --- 10.0.0.2 ping statistics --- 00:07:07.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:07.558 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:07:07.558 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:07.558 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:07.558 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:07:07.558 00:07:07.558 --- 10.0.0.1 ping statistics --- 00:07:07.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:07.558 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:07:07.558 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:07.558 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:07:07.558 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:07.558 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:07.558 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:07.558 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:07.558 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:07.558 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:07.558 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:07.558 23:10:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:07:07.558 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:07.558 23:10:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:07.558 23:10:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:07.558 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2247645 00:07:07.558 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:07.558 23:10:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2247645 00:07:07.558 23:10:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 2247645 ']' 00:07:07.558 23:10:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.558 23:10:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:07.558 23:10:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.558 23:10:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:07.558 23:10:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:07.558 [2024-07-15 23:10:22.809677] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
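Note: the gather_supported_nvmf_pci_devs pass above (the same one ran for the earlier tests) is what turns the e810 NIC selection into kernel interfaces: it matches Intel devices 0x1592/0x159b on the PCI bus and then picks up the net devices sysfs exposes under each matching function. A rough standalone equivalent for illustration only; the test uses its own cached PCI scan rather than lspci:

  # list kernel net devices backed by Intel E810 (8086:159b) functions
  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      ls "/sys/bus/pci/devices/$pci/net/"    # e.g. cvl_0_0, cvl_0_1 in this run
  done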
00:07:07.558 [2024-07-15 23:10:22.809777] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:07.558 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.815 [2024-07-15 23:10:22.875897] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:07.815 [2024-07-15 23:10:22.985338] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:07.816 [2024-07-15 23:10:22.985399] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:07.816 [2024-07-15 23:10:22.985413] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:07.816 [2024-07-15 23:10:22.985424] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:07.816 [2024-07-15 23:10:22.985434] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:07.816 [2024-07-15 23:10:22.985515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.816 [2024-07-15 23:10:22.985583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:07.816 [2024-07-15 23:10:22.985648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:07.816 [2024-07-15 23:10:22.985652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.816 23:10:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:07.816 23:10:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:07:07.816 23:10:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:07.816 23:10:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:07.816 23:10:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:08.084 23:10:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:08.084 23:10:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:08.084 23:10:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:08.084 23:10:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:07:08.084 23:10:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:07:08.084 23:10:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:07:08.084 "nvmf_tgt_1" 00:07:08.084 23:10:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:07:08.343 "nvmf_tgt_2" 00:07:08.343 23:10:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:08.343 23:10:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:07:08.343 23:10:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:07:08.343 23:10:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:07:08.600 true 00:07:08.600 23:10:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:07:08.600 true 00:07:08.600 23:10:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:08.600 23:10:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:07:08.857 23:10:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:07:08.857 23:10:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:08.857 23:10:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:07:08.857 23:10:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:08.857 23:10:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:07:08.857 23:10:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:08.857 23:10:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:07:08.857 23:10:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:08.857 23:10:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:08.857 rmmod nvme_tcp 00:07:08.857 rmmod nvme_fabrics 00:07:08.857 rmmod nvme_keyring 00:07:08.857 23:10:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:08.857 23:10:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:07:08.857 23:10:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:07:08.857 23:10:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2247645 ']' 00:07:08.857 23:10:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2247645 00:07:08.857 23:10:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 2247645 ']' 00:07:08.857 23:10:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 2247645 00:07:08.857 23:10:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:07:08.857 23:10:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:08.857 23:10:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2247645 00:07:08.857 23:10:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:08.857 23:10:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:08.857 23:10:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2247645' 00:07:08.857 killing process with pid 2247645 00:07:08.857 23:10:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 2247645 00:07:08.857 23:10:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 2247645 00:07:09.114 23:10:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:09.114 23:10:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:09.114 23:10:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:09.114 23:10:24 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:09.114 23:10:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:09.114 23:10:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:09.114 23:10:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:09.114 23:10:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:11.016 23:10:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:11.274 00:07:11.274 real 0m5.939s 00:07:11.274 user 0m6.576s 00:07:11.274 sys 0m1.982s 00:07:11.275 23:10:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.275 23:10:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:11.275 ************************************ 00:07:11.275 END TEST nvmf_multitarget 00:07:11.275 ************************************ 00:07:11.275 23:10:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:11.275 23:10:26 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:11.275 23:10:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:11.275 23:10:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.275 23:10:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:11.275 ************************************ 00:07:11.275 START TEST nvmf_rpc 00:07:11.275 ************************************ 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:11.275 * Looking for test storage... 
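Annotation: the nvmf_multitarget test that just finished drives the extra-target RPCs visible in its trace through multitarget_rpc.py. A condensed sketch of that flow, using the repository-relative script path and the same jq length checks:

  RPC=test/nvmf/target/multitarget_rpc.py
  $RPC nvmf_get_targets | jq length           # 1: only the default target exists
  $RPC nvmf_create_target -n nvmf_tgt_1 -s 32
  $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
  $RPC nvmf_get_targets | jq length           # 3 after the two extra targets
  $RPC nvmf_delete_target -n nvmf_tgt_1
  $RPC nvmf_delete_target -n nvmf_tgt_2
  $RPC nvmf_get_targets | jq length           # back to 1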
00:07:11.275 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:07:11.275 23:10:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
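Annotation: the common.sh sourcing above also derives the host identity that later nvme connect calls pass along. A minimal sketch of the relationship seen in the trace (the exact extraction common.sh performs may differ; this only reproduces the visible mapping, and the connect line mirrors the one used later in the run):

  NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
  NVME_HOSTID=${NVME_HOSTNQN##*:}     # keep only the uuid part, passed as --hostid
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420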
00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:13.805 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:13.805 Found 0000:84:00.1 (0x8086 - 0x159b) 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:07:13.805 Found net devices under 0000:84:00.0: cvl_0_0 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:07:13.805 Found net devices under 0000:84:00.1: cvl_0_1 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:13.805 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:13.806 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:13.806 23:10:28 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:13.806 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:13.806 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:13.806 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:13.806 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:13.806 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:13.806 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:13.806 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:13.806 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:13.806 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:13.806 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:13.806 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:13.806 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:13.806 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:13.806 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:13.806 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:13.806 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:13.806 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:07:13.806 00:07:13.806 --- 10.0.0.2 ping statistics --- 00:07:13.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.806 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:07:13.806 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:13.806 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:13.806 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:07:13.806 00:07:13.806 --- 10.0.0.1 ping statistics --- 00:07:13.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.806 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:07:13.806 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:13.806 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:07:13.806 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:13.806 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:13.806 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:13.806 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:13.806 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:13.806 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:13.806 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:13.806 23:10:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:07:13.806 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:13.806 23:10:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:13.806 23:10:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.806 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2249751 00:07:13.806 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:13.806 23:10:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2249751 00:07:13.806 23:10:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 2249751 ']' 00:07:13.806 23:10:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.806 23:10:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:13.806 23:10:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.806 23:10:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:13.806 23:10:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.806 [2024-07-15 23:10:28.725820] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:07:13.806 [2024-07-15 23:10:28.725902] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:13.806 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.806 [2024-07-15 23:10:28.805873] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:13.806 [2024-07-15 23:10:28.940012] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:13.806 [2024-07-15 23:10:28.940097] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
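Annotation: the nvmf_tcp_init sequence above splits the two E810 ports into a target half and an initiator half: the target port moves into a network namespace and the two sides then ping each other to confirm reachability. Condensed, the plumbing traced above is:

  ip netns add cvl_0_0_ns_spdk                                    # target namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator IP stays on the host
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                              # host -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # namespace -> host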
00:07:13.806 [2024-07-15 23:10:28.940137] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:13.806 [2024-07-15 23:10:28.940158] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:13.806 [2024-07-15 23:10:28.940192] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:13.806 [2024-07-15 23:10:28.940417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.806 [2024-07-15 23:10:28.940480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.806 [2024-07-15 23:10:28.940545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:13.806 [2024-07-15 23:10:28.940554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.806 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:13.806 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:13.806 23:10:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:13.806 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:13.806 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.806 23:10:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:13.806 23:10:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:07:13.806 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.806 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.806 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.806 23:10:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:07:13.806 "tick_rate": 2700000000, 00:07:13.806 "poll_groups": [ 00:07:13.806 { 00:07:13.806 "name": "nvmf_tgt_poll_group_000", 00:07:13.806 "admin_qpairs": 0, 00:07:13.806 "io_qpairs": 0, 00:07:13.806 "current_admin_qpairs": 0, 00:07:13.806 "current_io_qpairs": 0, 00:07:13.806 "pending_bdev_io": 0, 00:07:13.806 "completed_nvme_io": 0, 00:07:13.806 "transports": [] 00:07:13.806 }, 00:07:13.806 { 00:07:13.806 "name": "nvmf_tgt_poll_group_001", 00:07:13.806 "admin_qpairs": 0, 00:07:13.806 "io_qpairs": 0, 00:07:13.806 "current_admin_qpairs": 0, 00:07:13.806 "current_io_qpairs": 0, 00:07:13.806 "pending_bdev_io": 0, 00:07:13.806 "completed_nvme_io": 0, 00:07:13.806 "transports": [] 00:07:13.806 }, 00:07:13.806 { 00:07:13.806 "name": "nvmf_tgt_poll_group_002", 00:07:13.806 "admin_qpairs": 0, 00:07:13.806 "io_qpairs": 0, 00:07:13.806 "current_admin_qpairs": 0, 00:07:13.806 "current_io_qpairs": 0, 00:07:13.806 "pending_bdev_io": 0, 00:07:13.806 "completed_nvme_io": 0, 00:07:13.806 "transports": [] 00:07:13.806 }, 00:07:13.806 { 00:07:13.806 "name": "nvmf_tgt_poll_group_003", 00:07:13.806 "admin_qpairs": 0, 00:07:13.806 "io_qpairs": 0, 00:07:13.806 "current_admin_qpairs": 0, 00:07:13.806 "current_io_qpairs": 0, 00:07:13.806 "pending_bdev_io": 0, 00:07:13.806 "completed_nvme_io": 0, 00:07:13.806 "transports": [] 00:07:13.806 } 00:07:13.806 ] 00:07:13.806 }' 00:07:13.806 23:10:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:07:13.806 23:10:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:07:13.806 23:10:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:07:13.806 23:10:29 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:07:14.064 23:10:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:07:14.064 23:10:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:07:14.064 23:10:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:07:14.064 23:10:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:14.064 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.064 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.064 [2024-07-15 23:10:29.196906] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:14.064 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.064 23:10:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:07:14.064 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.064 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.064 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.064 23:10:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:07:14.064 "tick_rate": 2700000000, 00:07:14.064 "poll_groups": [ 00:07:14.064 { 00:07:14.064 "name": "nvmf_tgt_poll_group_000", 00:07:14.064 "admin_qpairs": 0, 00:07:14.064 "io_qpairs": 0, 00:07:14.064 "current_admin_qpairs": 0, 00:07:14.064 "current_io_qpairs": 0, 00:07:14.064 "pending_bdev_io": 0, 00:07:14.064 "completed_nvme_io": 0, 00:07:14.064 "transports": [ 00:07:14.064 { 00:07:14.064 "trtype": "TCP" 00:07:14.064 } 00:07:14.064 ] 00:07:14.064 }, 00:07:14.064 { 00:07:14.064 "name": "nvmf_tgt_poll_group_001", 00:07:14.064 "admin_qpairs": 0, 00:07:14.064 "io_qpairs": 0, 00:07:14.064 "current_admin_qpairs": 0, 00:07:14.064 "current_io_qpairs": 0, 00:07:14.064 "pending_bdev_io": 0, 00:07:14.064 "completed_nvme_io": 0, 00:07:14.064 "transports": [ 00:07:14.064 { 00:07:14.064 "trtype": "TCP" 00:07:14.064 } 00:07:14.064 ] 00:07:14.064 }, 00:07:14.064 { 00:07:14.064 "name": "nvmf_tgt_poll_group_002", 00:07:14.064 "admin_qpairs": 0, 00:07:14.064 "io_qpairs": 0, 00:07:14.064 "current_admin_qpairs": 0, 00:07:14.064 "current_io_qpairs": 0, 00:07:14.064 "pending_bdev_io": 0, 00:07:14.064 "completed_nvme_io": 0, 00:07:14.064 "transports": [ 00:07:14.064 { 00:07:14.064 "trtype": "TCP" 00:07:14.064 } 00:07:14.064 ] 00:07:14.064 }, 00:07:14.064 { 00:07:14.064 "name": "nvmf_tgt_poll_group_003", 00:07:14.064 "admin_qpairs": 0, 00:07:14.064 "io_qpairs": 0, 00:07:14.064 "current_admin_qpairs": 0, 00:07:14.064 "current_io_qpairs": 0, 00:07:14.064 "pending_bdev_io": 0, 00:07:14.064 "completed_nvme_io": 0, 00:07:14.064 "transports": [ 00:07:14.064 { 00:07:14.064 "trtype": "TCP" 00:07:14.064 } 00:07:14.064 ] 00:07:14.064 } 00:07:14.064 ] 00:07:14.064 }' 00:07:14.064 23:10:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:07:14.064 23:10:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:14.064 23:10:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:14.064 23:10:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:14.064 23:10:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:07:14.064 23:10:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:07:14.064 23:10:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
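Annotation: rpc.sh then verifies the target's initial state and adds the TCP transport: nvmf_get_stats should report one poll group per core (four for -m 0xF) with no transports, and after nvmf_create_transport each poll group carries a TCP transport entry. A sketch of the equivalent checks, assuming rpc_cmd maps to scripts/rpc.py on the default /var/tmp/spdk.sock socket:

  RPC=./scripts/rpc.py
  $RPC nvmf_get_stats | jq '.poll_groups[].name' | wc -l                              # 4 poll groups
  $RPC nvmf_get_stats | jq '.poll_groups[0].transports[0]'                            # null before the transport exists
  $RPC nvmf_create_transport -t tcp -o -u 8192                                        # options as passed by the harness
  $RPC nvmf_get_stats | jq '.poll_groups[0].transports[0]'                            # { "trtype": "TCP" }
  $RPC nvmf_get_stats | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1}END{print s}'  # 0: nothing connected yet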
00:07:14.064 23:10:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:14.064 23:10:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:14.064 23:10:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:07:14.064 23:10:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:07:14.064 23:10:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:07:14.064 23:10:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:07:14.064 23:10:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:07:14.064 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.064 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.064 Malloc1 00:07:14.064 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.064 23:10:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:14.064 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.064 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.064 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.064 23:10:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:14.065 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.065 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.065 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.065 23:10:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:07:14.065 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.065 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.065 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.065 23:10:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:14.065 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.065 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.065 [2024-07-15 23:10:29.344279] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:14.065 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.065 23:10:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:07:14.065 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:14.065 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:07:14.065 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:07:14.065 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.065 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:14.065 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.065 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:14.065 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.065 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:14.065 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:14.065 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:07:14.065 [2024-07-15 23:10:29.366940] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:07:14.322 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:14.322 could not add new controller: failed to write to nvme-fabrics device 00:07:14.322 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:14.322 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:14.322 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:14.322 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:14.322 23:10:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:14.322 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.322 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.322 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.322 23:10:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:14.887 23:10:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:07:14.887 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:14.887 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:14.887 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:14.887 23:10:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:16.781 23:10:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:16.781 23:10:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:16.781 23:10:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:16.781 23:10:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:16.781 23:10:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:16.781 23:10:32 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:16.781 23:10:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:17.039 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:17.039 23:10:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:17.039 23:10:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:17.039 23:10:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:17.039 23:10:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:17.039 23:10:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:17.039 23:10:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:17.039 23:10:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:17.039 23:10:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:17.039 23:10:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.039 23:10:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.039 23:10:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.039 23:10:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:17.039 23:10:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:17.039 23:10:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:17.039 23:10:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:07:17.039 23:10:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.039 23:10:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:17.039 23:10:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.039 23:10:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:17.039 23:10:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.039 23:10:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:17.039 23:10:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:17.039 23:10:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:17.039 [2024-07-15 23:10:32.165906] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:07:17.039 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:17.039 could not add new controller: failed to write to nvme-fabrics device 00:07:17.039 23:10:32 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:07:17.039 23:10:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:17.039 23:10:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:17.039 23:10:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:17.039 23:10:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:07:17.039 23:10:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.039 23:10:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.039 23:10:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.039 23:10:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:17.601 23:10:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:07:17.601 23:10:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:17.601 23:10:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:17.601 23:10:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:17.601 23:10:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:20.123 23:10:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:20.123 23:10:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:20.123 23:10:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:20.123 23:10:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:20.123 23:10:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:20.123 23:10:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:20.123 23:10:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:20.123 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:20.123 23:10:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:20.123 23:10:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:20.123 23:10:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:20.123 23:10:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:20.123 23:10:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:20.123 23:10:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:20.123 23:10:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:20.123 23:10:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:20.123 23:10:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.123 23:10:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.123 23:10:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.123 23:10:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:07:20.123 23:10:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:20.123 23:10:34 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:20.123 23:10:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.123 23:10:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.123 23:10:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.123 23:10:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:20.123 23:10:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.123 23:10:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.123 [2024-07-15 23:10:35.009674] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:20.123 23:10:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.123 23:10:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:20.123 23:10:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.123 23:10:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.123 23:10:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.123 23:10:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:20.123 23:10:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.123 23:10:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.123 23:10:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.123 23:10:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:20.687 23:10:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:20.687 23:10:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:20.687 23:10:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:20.687 23:10:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:20.687 23:10:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:22.680 23:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:22.680 23:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:22.680 23:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:22.680 23:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:22.680 23:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:22.680 23:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:22.680 23:10:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:22.680 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:22.680 23:10:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:22.680 23:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:22.680 23:10:37 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:22.680 23:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:22.680 23:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:22.680 23:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:22.680 23:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:22.680 23:10:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:22.680 23:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.680 23:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.680 23:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.680 23:10:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:22.680 23:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.680 23:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.680 23:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.680 23:10:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:22.680 23:10:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:22.680 23:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.680 23:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.680 23:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.680 23:10:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:22.680 23:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.680 23:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.680 [2024-07-15 23:10:37.885653] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:22.680 23:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.680 23:10:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:22.680 23:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.680 23:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.680 23:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.680 23:10:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:22.680 23:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.680 23:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.680 23:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.680 23:10:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:23.243 23:10:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:23.243 23:10:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:07:23.243 23:10:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:23.243 23:10:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:23.243 23:10:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:25.764 23:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:25.764 23:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:25.764 23:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:25.764 23:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:25.764 23:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:25.764 23:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:25.764 23:10:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:25.764 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:25.764 23:10:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:25.764 23:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:25.764 23:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:25.764 23:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:25.764 23:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:25.764 23:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:25.764 23:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:25.764 23:10:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:25.764 23:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.764 23:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.764 23:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.765 23:10:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:25.765 23:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.765 23:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.765 23:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.765 23:10:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:25.765 23:10:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:25.765 23:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.765 23:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.765 23:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.765 23:10:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:25.765 23:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.765 23:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.765 [2024-07-15 23:10:40.635817] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:07:25.765 23:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.765 23:10:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:25.765 23:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.765 23:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.765 23:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.765 23:10:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:25.765 23:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.765 23:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.765 23:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.765 23:10:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:26.329 23:10:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:26.329 23:10:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:26.329 23:10:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:26.329 23:10:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:26.329 23:10:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:28.225 23:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:28.225 23:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:28.225 23:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:28.225 23:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:28.225 23:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:28.225 23:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:28.225 23:10:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:28.225 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:28.225 23:10:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:28.225 23:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:28.225 23:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:28.225 23:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:28.225 23:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:28.225 23:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:28.225 23:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:28.225 23:10:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:28.225 23:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.225 23:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.225 23:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:07:28.225 23:10:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:28.225 23:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.225 23:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.225 23:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.225 23:10:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:28.225 23:10:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:28.225 23:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.225 23:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.225 23:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.225 23:10:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:28.225 23:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.225 23:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.225 [2024-07-15 23:10:43.477499] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:28.225 23:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.225 23:10:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:28.225 23:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.225 23:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.225 23:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.225 23:10:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:28.225 23:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.225 23:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.225 23:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.225 23:10:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:29.163 23:10:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:29.163 23:10:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:29.163 23:10:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:29.163 23:10:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:29.163 23:10:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:31.075 23:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:31.075 23:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:31.075 23:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:31.075 23:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:31.075 23:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:31.075 
23:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:31.075 23:10:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:31.075 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:31.075 23:10:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:31.075 23:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:31.075 23:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:31.076 23:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:31.076 23:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:31.076 23:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:31.076 23:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:31.076 23:10:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:31.076 23:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.076 23:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.076 23:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.076 23:10:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:31.076 23:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.076 23:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.076 23:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.076 23:10:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:31.076 23:10:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:31.076 23:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.076 23:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.076 23:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.076 23:10:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:31.076 23:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.076 23:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.076 [2024-07-15 23:10:46.388888] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:31.335 23:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.335 23:10:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:31.335 23:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.335 23:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.335 23:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.335 23:10:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:31.335 23:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.335 23:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.335 23:10:46 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.335 23:10:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:31.902 23:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:31.902 23:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:31.902 23:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:31.902 23:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:31.902 23:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:33.808 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:33.808 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:33.808 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:33.808 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:33.808 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:33.808 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:33.808 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:34.068 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:34.068 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:34.068 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:34.068 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:34.068 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:34.068 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:34.068 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:34.068 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:34.068 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:34.068 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.068 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.068 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.068 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:34.068 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.068 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.068 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.068 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:07:34.068 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:34.068 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:34.068 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.068 23:10:49 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:07:34.068 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.068 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:34.068 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.068 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.068 [2024-07-15 23:10:49.185311] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:34.068 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.068 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:34.068 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.068 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.068 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.068 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:34.068 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.068 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.068 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.068 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.068 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.068 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.068 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.068 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:34.068 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.068 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.068 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.068 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:34.068 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:34.068 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.068 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.069 [2024-07-15 23:10:49.233364] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.069 [2024-07-15 23:10:49.281533] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
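The waitforserial and waitforserial_disconnect helpers traced above poll lsblk until a namespace with the expected serial appears (or disappears). A minimal sketch of that polling pattern in bash, assuming the same serial string and the 15-retry budget seen in the trace (the function bodies are reconstructed for illustration, not copied from autotest_common.sh):

  waitforserial() {
      local serial=$1 i=0
      sleep 2                                   # give the kernel time to enumerate the new namespace
      while (( i++ <= 15 )); do
          # count block devices whose SERIAL column matches
          (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
          sleep 1
      done
      return 1
  }

  waitforserial_disconnect() {
      local serial=$1 i=0
      while (( i++ <= 15 )); do
          # succeed once no device reports the serial any more
          lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
          sleep 1
      done
      return 1
  }

Usage mirrors the trace: waitforserial SPDKISFASTANDAWESOME after nvme connect, waitforserial_disconnect SPDKISFASTANDAWESOME after nvme disconnect.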
00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.069 [2024-07-15 23:10:49.329685] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
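Each pass through the rpc.sh@99-107 loop above creates the subsystem, exposes it on the TCP listener, attaches Malloc1 as namespace 1 and then removes the namespace and deletes the subsystem again, with no host I/O in between. A self-contained sketch of one such churn loop, assuming scripts/rpc.py talks to the running target over /var/tmp/spdk.sock and that a bdev named Malloc1 already exists (the loop count of 5 matches the seq 1 5 in the trace):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  for i in $(seq 1 5); do
      $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
      $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
      $rpc nvmf_subsystem_add_ns "$nqn" Malloc1
      $rpc nvmf_subsystem_allow_any_host "$nqn"
      # tear down: detach namespace 1 and drop the subsystem
      $rpc nvmf_subsystem_remove_ns "$nqn" 1
      $rpc nvmf_delete_subsystem "$nqn"
  done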
00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.069 [2024-07-15 23:10:49.377899] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.069 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:07:34.328 "tick_rate": 2700000000, 00:07:34.328 "poll_groups": [ 00:07:34.328 { 00:07:34.328 "name": "nvmf_tgt_poll_group_000", 00:07:34.328 "admin_qpairs": 2, 00:07:34.328 "io_qpairs": 84, 00:07:34.328 "current_admin_qpairs": 0, 00:07:34.328 "current_io_qpairs": 0, 00:07:34.328 "pending_bdev_io": 0, 00:07:34.328 "completed_nvme_io": 208, 00:07:34.328 "transports": [ 00:07:34.328 { 00:07:34.328 "trtype": "TCP" 00:07:34.328 } 00:07:34.328 ] 00:07:34.328 }, 00:07:34.328 { 00:07:34.328 "name": "nvmf_tgt_poll_group_001", 00:07:34.328 "admin_qpairs": 2, 00:07:34.328 "io_qpairs": 84, 00:07:34.328 "current_admin_qpairs": 0, 00:07:34.328 "current_io_qpairs": 0, 00:07:34.328 "pending_bdev_io": 0, 00:07:34.328 "completed_nvme_io": 100, 00:07:34.328 "transports": [ 00:07:34.328 { 00:07:34.328 "trtype": "TCP" 00:07:34.328 } 00:07:34.328 ] 00:07:34.328 }, 00:07:34.328 { 00:07:34.328 
"name": "nvmf_tgt_poll_group_002", 00:07:34.328 "admin_qpairs": 1, 00:07:34.328 "io_qpairs": 84, 00:07:34.328 "current_admin_qpairs": 0, 00:07:34.328 "current_io_qpairs": 0, 00:07:34.328 "pending_bdev_io": 0, 00:07:34.328 "completed_nvme_io": 172, 00:07:34.328 "transports": [ 00:07:34.328 { 00:07:34.328 "trtype": "TCP" 00:07:34.328 } 00:07:34.328 ] 00:07:34.328 }, 00:07:34.328 { 00:07:34.328 "name": "nvmf_tgt_poll_group_003", 00:07:34.328 "admin_qpairs": 2, 00:07:34.328 "io_qpairs": 84, 00:07:34.328 "current_admin_qpairs": 0, 00:07:34.328 "current_io_qpairs": 0, 00:07:34.328 "pending_bdev_io": 0, 00:07:34.328 "completed_nvme_io": 206, 00:07:34.328 "transports": [ 00:07:34.328 { 00:07:34.328 "trtype": "TCP" 00:07:34.328 } 00:07:34.328 ] 00:07:34.328 } 00:07:34.328 ] 00:07:34.328 }' 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:34.328 rmmod nvme_tcp 00:07:34.328 rmmod nvme_fabrics 00:07:34.328 rmmod nvme_keyring 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2249751 ']' 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2249751 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 2249751 ']' 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 2249751 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2249751 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2249751' 00:07:34.328 killing process with pid 2249751 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 2249751 00:07:34.328 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 2249751 00:07:34.587 23:10:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:34.587 23:10:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:34.587 23:10:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:34.587 23:10:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:34.587 23:10:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:34.587 23:10:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:34.587 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:34.587 23:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:37.122 23:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:37.122 00:07:37.122 real 0m25.563s 00:07:37.122 user 1m22.982s 00:07:37.122 sys 0m4.284s 00:07:37.122 23:10:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.122 23:10:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.122 ************************************ 00:07:37.122 END TEST nvmf_rpc 00:07:37.122 ************************************ 00:07:37.122 23:10:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:37.122 23:10:51 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:37.122 23:10:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:37.122 23:10:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.122 23:10:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:37.122 ************************************ 00:07:37.122 START TEST nvmf_invalid 00:07:37.122 ************************************ 00:07:37.122 23:10:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:37.122 * Looking for test storage... 
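The jsum helper invoked at rpc.sh@112 and @113 above sums one numeric field across every poll group reported by nvmf_get_stats (7 admin qpairs and 336 I/O qpairs in this run) before the trap is cleared and nvmftestfini unloads the nvme-tcp modules. A minimal sketch of that aggregation, reconstructed from the jq and awk calls visible in the trace (the rpc path is the same assumption as in the sketch above):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  jsum() {
      local filter=$1
      # print each matching value, then add them up
      jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
  }

  stats=$($rpc nvmf_get_stats)
  (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 7 in this run
  (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 336 in this run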
00:07:37.123 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:07:37.123 23:10:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:39.024 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:39.024 Found 0000:84:00.1 (0x8086 - 0x159b) 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:07:39.024 Found net devices under 0000:84:00.0: cvl_0_0 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:07:39.024 Found net devices under 0000:84:00.1: cvl_0_1 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:39.024 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:39.024 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:07:39.024 00:07:39.024 --- 10.0.0.2 ping statistics --- 00:07:39.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.024 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:39.024 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:39.024 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:07:39.024 00:07:39.024 --- 10.0.0.1 ping statistics --- 00:07:39.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.024 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2254395 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2254395 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 2254395 ']' 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:39.024 23:10:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:39.024 [2024-07-15 23:10:54.306385] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
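nvmftestinit above detects the two E810 ports by PCI ID, moves cvl_0_0 into a private network namespace, assigns the 10.0.0.1/10.0.0.2 pair, opens TCP port 4420 and verifies connectivity with ping before nvmfappstart launches nvmf_tgt inside the namespace. A condensed sketch of that setup, assuming the interface names and addresses shown in the trace (the real nvmf/common.sh performs far more discovery and cleanup, and waiting for the RPC socket is simplified here to a bare socket check):

  ns=cvl_0_0_ns_spdk

  ip netns add $ns
  ip link set cvl_0_0 netns $ns                          # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address
  ip netns exec $ns ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
  ip link set cvl_0_1 up
  ip netns exec $ns ip link set cvl_0_0 up
  ip netns exec $ns ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # sanity-check both directions before starting the target
  ping -c 1 10.0.0.2
  ip netns exec $ns ping -c 1 10.0.0.1

  # run the NVMe-oF target in the namespace and wait for its RPC socket
  ip netns exec $ns ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  until [ -S /var/tmp/spdk.sock ]; do sleep 1; done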
00:07:39.024 [2024-07-15 23:10:54.306465] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:39.282 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.282 [2024-07-15 23:10:54.376670] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:39.282 [2024-07-15 23:10:54.497430] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:39.282 [2024-07-15 23:10:54.497497] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:39.282 [2024-07-15 23:10:54.497514] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:39.282 [2024-07-15 23:10:54.497528] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:39.282 [2024-07-15 23:10:54.497540] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:39.282 [2024-07-15 23:10:54.497618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.282 [2024-07-15 23:10:54.497673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:39.282 [2024-07-15 23:10:54.497726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:39.282 [2024-07-15 23:10:54.497729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.216 23:10:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:40.216 23:10:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:07:40.216 23:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:40.216 23:10:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:40.216 23:10:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:40.216 23:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:40.216 23:10:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:40.216 23:10:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode14005 00:07:40.473 [2024-07-15 23:10:55.547479] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:07:40.473 23:10:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:07:40.473 { 00:07:40.473 "nqn": "nqn.2016-06.io.spdk:cnode14005", 00:07:40.473 "tgt_name": "foobar", 00:07:40.473 "method": "nvmf_create_subsystem", 00:07:40.473 "req_id": 1 00:07:40.473 } 00:07:40.473 Got JSON-RPC error response 00:07:40.473 response: 00:07:40.473 { 00:07:40.473 "code": -32603, 00:07:40.473 "message": "Unable to find target foobar" 00:07:40.473 }' 00:07:40.473 23:10:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:07:40.473 { 00:07:40.473 "nqn": "nqn.2016-06.io.spdk:cnode14005", 00:07:40.473 "tgt_name": "foobar", 00:07:40.473 "method": "nvmf_create_subsystem", 00:07:40.473 "req_id": 1 00:07:40.473 } 00:07:40.473 Got JSON-RPC error response 00:07:40.473 response: 00:07:40.473 { 00:07:40.473 "code": -32603, 00:07:40.473 "message": "Unable to find target foobar" 
00:07:40.473 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:07:40.473 23:10:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:07:40.473 23:10:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode8250 00:07:40.731 [2024-07-15 23:10:55.832467] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8250: invalid serial number 'SPDKISFASTANDAWESOME' 00:07:40.731 23:10:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:07:40.731 { 00:07:40.731 "nqn": "nqn.2016-06.io.spdk:cnode8250", 00:07:40.731 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:40.731 "method": "nvmf_create_subsystem", 00:07:40.731 "req_id": 1 00:07:40.731 } 00:07:40.731 Got JSON-RPC error response 00:07:40.731 response: 00:07:40.731 { 00:07:40.731 "code": -32602, 00:07:40.731 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:40.731 }' 00:07:40.731 23:10:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:07:40.731 { 00:07:40.731 "nqn": "nqn.2016-06.io.spdk:cnode8250", 00:07:40.731 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:40.731 "method": "nvmf_create_subsystem", 00:07:40.731 "req_id": 1 00:07:40.731 } 00:07:40.731 Got JSON-RPC error response 00:07:40.731 response: 00:07:40.731 { 00:07:40.731 "code": -32602, 00:07:40.731 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:40.731 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:40.731 23:10:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:07:40.731 23:10:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode20038 00:07:40.989 [2024-07-15 23:10:56.069188] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20038: invalid model number 'SPDK_Controller' 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:07:40.989 { 00:07:40.989 "nqn": "nqn.2016-06.io.spdk:cnode20038", 00:07:40.989 "model_number": "SPDK_Controller\u001f", 00:07:40.989 "method": "nvmf_create_subsystem", 00:07:40.989 "req_id": 1 00:07:40.989 } 00:07:40.989 Got JSON-RPC error response 00:07:40.989 response: 00:07:40.989 { 00:07:40.989 "code": -32602, 00:07:40.989 "message": "Invalid MN SPDK_Controller\u001f" 00:07:40.989 }' 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:07:40.989 { 00:07:40.989 "nqn": "nqn.2016-06.io.spdk:cnode20038", 00:07:40.989 "model_number": "SPDK_Controller\u001f", 00:07:40.989 "method": "nvmf_create_subsystem", 00:07:40.989 "req_id": 1 00:07:40.989 } 00:07:40.989 Got JSON-RPC error response 00:07:40.989 response: 00:07:40.989 { 00:07:40.989 "code": -32602, 00:07:40.989 "message": "Invalid MN SPDK_Controller\u001f" 00:07:40.989 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' 
'84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:40.989 23:10:56 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:40.989 
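The wall of xtrace in this stretch is target/invalid.sh's gen_random_s building a 21-character serial number one character at a time. Condensed, each traced iteration does roughly the following; only the printf/echo/append steps are visible in the trace, so the random index selection is an assumption:

    # one loop iteration: pick a code point from the chars array shown above, render it, append it
    c=${chars[$((RANDOM % ${#chars[@]}))]}   # assumed selection; the trace only shows the chosen value
    hex=$(printf %x "$c")                    # e.g. 98 -> 62
    string+=$(echo -e "\x$hex")              # e.g. \x62 -> 'b'
    (( ll++ ))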
23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:07:40.989 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:07:40.990 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:07:40.990 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:40.990 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:40.990 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:07:40.990 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:07:40.990 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:07:40.990 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:40.990 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:40.990 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ b == \- ]] 00:07:40.990 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'bm0aMjf>|3cdzl}#09'\''q,' 00:07:40.990 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'bm0aMjf>|3cdzl}#09'\''q,' nqn.2016-06.io.spdk:cnode27957 00:07:41.249 [2024-07-15 23:10:56.406262] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27957: invalid serial number 'bm0aMjf>|3cdzl}#09'q,' 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:07:41.249 { 00:07:41.249 "nqn": "nqn.2016-06.io.spdk:cnode27957", 00:07:41.249 "serial_number": "bm0aMjf>|3cdzl}#09'\''q,", 00:07:41.249 "method": "nvmf_create_subsystem", 00:07:41.249 "req_id": 1 00:07:41.249 } 00:07:41.249 Got JSON-RPC error response 00:07:41.249 response: 
00:07:41.249 { 00:07:41.249 "code": -32602, 00:07:41.249 "message": "Invalid SN bm0aMjf>|3cdzl}#09'\''q," 00:07:41.249 }' 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:07:41.249 { 00:07:41.249 "nqn": "nqn.2016-06.io.spdk:cnode27957", 00:07:41.249 "serial_number": "bm0aMjf>|3cdzl}#09'q,", 00:07:41.249 "method": "nvmf_create_subsystem", 00:07:41.249 "req_id": 1 00:07:41.249 } 00:07:41.249 Got JSON-RPC error response 00:07:41.249 response: 00:07:41.249 { 00:07:41.249 "code": -32602, 00:07:41.249 "message": "Invalid SN bm0aMjf>|3cdzl}#09'q," 00:07:41.249 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:07:41.249 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 
00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 
00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.250 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.508 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:07:41.508 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:07:41.508 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:07:41.508 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.508 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.508 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:07:41.508 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:07:41.508 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:07:41.508 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.508 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.508 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:07:41.508 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:07:41.508 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:07:41.508 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.508 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.508 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:07:41.508 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:07:41.508 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:07:41.508 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:41.508 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:41.508 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ V == \- ]] 00:07:41.508 23:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'VV.7F$?tXTgMLRK037"H-j#< /dev/null' 00:07:44.104 23:10:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:46.640 23:11:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:46.640 00:07:46.640 real 0m9.333s 00:07:46.640 user 0m22.829s 00:07:46.640 sys 0m2.464s 00:07:46.640 23:11:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.640 23:11:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:46.640 ************************************ 00:07:46.640 END TEST nvmf_invalid 00:07:46.640 ************************************ 00:07:46.640 23:11:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:46.640 23:11:01 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:46.640 23:11:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:46.640 23:11:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.640 23:11:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:46.640 ************************************ 00:07:46.640 START TEST nvmf_abort 00:07:46.640 ************************************ 00:07:46.640 23:11:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:46.640 * Looking for test storage... 
00:07:46.640 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:46.640 23:11:01 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:46.640 23:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:46.640 23:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:46.640 23:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:46.640 23:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:46.640 23:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:46.640 23:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:46.640 23:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:46.640 23:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:46.640 23:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:46.640 23:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:46.640 23:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:46.640 23:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:46.640 23:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:46.640 23:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:46.640 23:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:46.640 23:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:46.640 23:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:46.641 23:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:46.641 23:11:01 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.641 23:11:01 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:46.641 23:11:01 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.641 23:11:01 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.641 23:11:01 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
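Earlier in this common.sh trace the host identity is generated on the fly with nvme-cli; the derivation amounts to the lines below. Only the resulting values are visible in the trace, so the exact parameter expansion is an assumption:

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # -> nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # keep just the uuid portion (assumed expansion)
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")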
00:07:46.641 23:11:01 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.641 23:11:01 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:46.641 23:11:01 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.641 23:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:07:46.641 23:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:46.641 23:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:46.641 23:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:46.641 23:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:46.641 23:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:46.641 23:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:46.641 23:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:46.641 23:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:46.641 23:11:01 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:46.641 23:11:01 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:46.641 23:11:01 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:46.641 23:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:46.641 23:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:46.641 23:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:46.641 23:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:46.641 23:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:46.641 23:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.641 23:11:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:46.641 23:11:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:46.641 23:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:46.641 23:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:46.641 23:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:07:46.641 23:11:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:48.626 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.626 23:11:03 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:48.626 Found 0000:84:00.1 (0x8086 - 0x159b) 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:07:48.626 Found net devices under 0000:84:00.0: cvl_0_0 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:07:48.626 Found net devices under 0000:84:00.1: cvl_0_1 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- 
# NVMF_INITIATOR_IP=10.0.0.1 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:48.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:48.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:07:48.626 00:07:48.626 --- 10.0.0.2 ping statistics --- 00:07:48.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.626 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:07:48.626 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:48.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
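For reference, the nvmf_tcp_init steps traced above wire the two back-to-back E810 ports into an initiator/target pair across a network namespace. Collected into a plain script, with device names and addresses exactly as in this run, they are:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator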
00:07:48.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:07:48.626 00:07:48.627 --- 10.0.0.1 ping statistics --- 00:07:48.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.627 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:07:48.627 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:48.627 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:07:48.627 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:48.627 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:48.627 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:48.627 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:48.627 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:48.627 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:48.627 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:48.627 23:11:03 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:48.627 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:48.627 23:11:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:48.627 23:11:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.627 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2257057 00:07:48.627 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2257057 00:07:48.627 23:11:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:48.627 23:11:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 2257057 ']' 00:07:48.627 23:11:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.627 23:11:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:48.627 23:11:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.627 23:11:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:48.627 23:11:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.627 [2024-07-15 23:11:03.738949] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:07:48.627 [2024-07-15 23:11:03.739043] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.627 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.627 [2024-07-15 23:11:03.802896] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:48.627 [2024-07-15 23:11:03.915552] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:48.627 [2024-07-15 23:11:03.915612] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:48.627 [2024-07-15 23:11:03.915634] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:48.627 [2024-07-15 23:11:03.915652] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:48.627 [2024-07-15 23:11:03.915667] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:48.627 [2024-07-15 23:11:03.915772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.627 [2024-07-15 23:11:03.915835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:48.627 [2024-07-15 23:11:03.915840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.885 23:11:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:48.885 23:11:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:07:48.885 23:11:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:48.886 23:11:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:48.886 23:11:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.886 23:11:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:48.886 23:11:04 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:48.886 23:11:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.886 23:11:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.886 [2024-07-15 23:11:04.067975] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:48.886 23:11:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.886 23:11:04 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:48.886 23:11:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.886 23:11:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.886 Malloc0 00:07:48.886 23:11:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.886 23:11:04 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:48.886 23:11:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.886 23:11:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.886 Delay0 00:07:48.886 23:11:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.886 23:11:04 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:48.886 23:11:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.886 23:11:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.886 23:11:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.886 23:11:04 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:48.886 23:11:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.886 23:11:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.886 23:11:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.886 23:11:04 
nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:48.886 23:11:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.886 23:11:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.886 [2024-07-15 23:11:04.136001] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:48.886 23:11:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.886 23:11:04 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:48.886 23:11:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.886 23:11:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.886 23:11:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.886 23:11:04 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:48.886 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.144 [2024-07-15 23:11:04.281853] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:51.675 Initializing NVMe Controllers 00:07:51.675 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:51.675 controller IO queue size 128 less than required 00:07:51.676 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:51.676 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:51.676 Initialization complete. Launching workers. 
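Before the abort example runs, the target is configured through the short series of RPCs traced above (rpc_cmd is autotest shorthand for the same calls scripts/rpc.py makes over the RPC socket). A condensed replay of that setup, with every flag taken from the trace; the Delay0 bdev layered on Malloc0 adds artificial latency so the abort example always has in-flight I/O to cancel:

  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
  $rpc bdev_malloc_create 64 4096 -b Malloc0
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # Exercise it from the initiator side with the abort example, exactly as invoked above:
  ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128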
00:07:51.676 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 33481 00:07:51.676 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33546, failed to submit 62 00:07:51.676 success 33485, unsuccess 61, failed 0 00:07:51.676 23:11:06 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:51.676 23:11:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.676 23:11:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:51.676 23:11:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.676 23:11:06 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:51.676 23:11:06 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:51.676 23:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:51.676 23:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:07:51.676 23:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:51.676 23:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:07:51.676 23:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:51.676 23:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:51.676 rmmod nvme_tcp 00:07:51.676 rmmod nvme_fabrics 00:07:51.676 rmmod nvme_keyring 00:07:51.676 23:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:51.676 23:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:07:51.676 23:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:07:51.676 23:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2257057 ']' 00:07:51.676 23:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2257057 00:07:51.676 23:11:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 2257057 ']' 00:07:51.676 23:11:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 2257057 00:07:51.676 23:11:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:07:51.676 23:11:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:51.676 23:11:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2257057 00:07:51.676 23:11:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:51.676 23:11:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:51.676 23:11:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2257057' 00:07:51.676 killing process with pid 2257057 00:07:51.676 23:11:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 2257057 00:07:51.676 23:11:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 2257057 00:07:51.676 23:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:51.676 23:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:51.676 23:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:51.676 23:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:51.676 23:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:51.676 23:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.676 23:11:06 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:51.676 23:11:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.210 23:11:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:54.210 00:07:54.210 real 0m7.525s 00:07:54.210 user 0m11.068s 00:07:54.210 sys 0m2.656s 00:07:54.210 23:11:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:54.210 23:11:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:54.210 ************************************ 00:07:54.210 END TEST nvmf_abort 00:07:54.210 ************************************ 00:07:54.210 23:11:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:54.210 23:11:08 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:54.210 23:11:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:54.210 23:11:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.210 23:11:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:54.210 ************************************ 00:07:54.210 START TEST nvmf_ns_hotplug_stress 00:07:54.210 ************************************ 00:07:54.210 23:11:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:54.210 * Looking for test storage... 00:07:54.210 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:54.210 23:11:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:54.210 23:11:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:54.210 23:11:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:54.210 23:11:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:54.210 23:11:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:54.211 23:11:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:54.211 23:11:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:54.211 23:11:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:54.211 23:11:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:54.211 23:11:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:54.211 23:11:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:54.211 23:11:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:54.211 23:11:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:54.211 23:11:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:54.211 23:11:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:54.211 23:11:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:54.211 23:11:09 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:54.211 23:11:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:54.211 23:11:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:54.211 23:11:09 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:54.211 23:11:09 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:54.211 23:11:09 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:54.211 23:11:09 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.211 23:11:09 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.211 23:11:09 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.211 23:11:09 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:54.211 23:11:09 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.211 23:11:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:07:54.211 23:11:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:54.211 23:11:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:54.211 23:11:09 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:54.211 23:11:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:54.211 23:11:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:54.211 23:11:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:54.211 23:11:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:54.211 23:11:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:54.211 23:11:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:54.211 23:11:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:54.211 23:11:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:54.211 23:11:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:54.211 23:11:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:54.211 23:11:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:54.211 23:11:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:54.211 23:11:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.211 23:11:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:54.211 23:11:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.211 23:11:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:54.211 23:11:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:54.211 23:11:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:07:54.211 23:11:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:56.115 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:56.115 Found 0000:84:00.1 (0x8086 - 0x159b) 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:56.115 23:11:11 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:07:56.115 Found net devices under 0000:84:00.0: cvl_0_0 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:07:56.115 Found net devices under 0000:84:00.1: cvl_0_1 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:56.115 23:11:11 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:56.115 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:56.115 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:07:56.115 00:07:56.115 --- 10.0.0.2 ping statistics --- 00:07:56.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.115 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:56.115 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
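The nvmf_tcp_init steps above boil down to splitting the two e810 ports between namespaces so one host can be both NVMe/TCP target and initiator over real NICs. Condensed from the trace (interface names, addresses and the iptables rule are exactly the ones used above):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                 # the two sanity pings shown above
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1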
00:07:56.115 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:07:56.115 00:07:56.115 --- 10.0.0.1 ping statistics --- 00:07:56.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.115 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:56.115 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:56.116 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:56.116 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:56.116 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:56.116 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:56.116 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:56.116 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:56.116 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:56.116 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2259414 00:07:56.116 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:56.116 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2259414 00:07:56.116 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 2259414 ']' 00:07:56.116 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.116 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:56.116 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.116 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:56.116 23:11:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:56.116 [2024-07-15 23:11:11.286557] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
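The -m 0xE mask passed to nvmf_tgt is a CPU bitmap, which is why the startup notices report three reactors on cores 1, 2 and 3 and leave core 0 free. A throwaway illustration (not part of the test) that decodes the mask:

  mask=0xE
  printf '%s -> cores:' "$mask"
  for i in $(seq 0 31); do (( (mask >> i) & 1 )) && printf ' %d' "$i"; done
  echo   # prints: 0xE -> cores: 1 2 3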
00:07:56.116 [2024-07-15 23:11:11.286641] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:56.116 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.116 [2024-07-15 23:11:11.356820] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:56.375 [2024-07-15 23:11:11.477518] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:56.375 [2024-07-15 23:11:11.477595] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:56.375 [2024-07-15 23:11:11.477620] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:56.375 [2024-07-15 23:11:11.477644] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:56.375 [2024-07-15 23:11:11.477661] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:56.375 [2024-07-15 23:11:11.477785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:56.375 [2024-07-15 23:11:11.477856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:56.375 [2024-07-15 23:11:11.477863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:56.943 23:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:56.943 23:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:07:56.943 23:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:56.943 23:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:56.943 23:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:57.202 23:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:57.202 23:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:57.202 23:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:57.202 [2024-07-15 23:11:12.490802] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:57.202 23:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:57.460 23:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:57.716 [2024-07-15 23:11:12.977303] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:57.717 23:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:57.974 23:11:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:07:58.230 Malloc0 00:07:58.230 23:11:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:58.487 Delay0 00:07:58.487 23:11:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.744 23:11:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:59.002 NULL1 00:07:59.002 23:11:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:59.259 23:11:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2259843 00:07:59.259 23:11:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:59.259 23:11:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2259843 00:07:59.259 23:11:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.259 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.638 Read completed with error (sct=0, sc=11) 00:08:00.638 23:11:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.638 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.638 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.638 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.638 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.638 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.638 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.638 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.896 23:11:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:00.896 23:11:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:01.153 true 00:08:01.153 23:11:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2259843 00:08:01.154 23:11:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.718 23:11:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.976 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:01.976 23:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:01.976 23:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:02.233 true 00:08:02.233 23:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2259843 00:08:02.233 23:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.490 23:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.748 23:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:02.748 23:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:03.005 true 00:08:03.005 23:11:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2259843 00:08:03.005 23:11:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.939 23:11:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.939 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:03.939 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.197 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.197 23:11:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:04.197 23:11:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:04.454 true 00:08:04.454 23:11:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2259843 00:08:04.454 23:11:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.712 23:11:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.970 23:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:04.970 23:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:05.226 true 00:08:05.226 23:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2259843 00:08:05.226 23:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.156 23:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.156 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:06.413 23:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:06.413 23:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:06.670 true 00:08:06.670 23:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2259843 00:08:06.670 23:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.927 23:11:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.184 23:11:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:07.184 23:11:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:07.442 true 00:08:07.442 23:11:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2259843 00:08:07.442 23:11:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.699 23:11:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.955 23:11:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:07.955 23:11:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:08.210 true 00:08:08.210 23:11:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2259843 00:08:08.210 23:11:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.139 23:11:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.139 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.396 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.396 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.396 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.396 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.396 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.396 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.653 23:11:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:09.653 23:11:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:09.653 
true 00:08:09.910 23:11:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2259843 00:08:09.910 23:11:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.473 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:10.473 23:11:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:10.730 23:11:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:10.730 23:11:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:10.997 true 00:08:10.997 23:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2259843 00:08:10.997 23:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.310 23:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.595 23:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:11.595 23:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:11.853 true 00:08:11.853 23:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2259843 00:08:11.853 23:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.782 23:11:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:13.039 23:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:13.039 23:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:13.039 true 00:08:13.296 23:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2259843 00:08:13.296 23:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.296 23:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:13.552 23:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:13.552 23:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 
00:08:13.809 true 00:08:13.809 23:11:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2259843 00:08:13.809 23:11:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.738 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.738 23:11:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.738 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.997 23:11:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:14.997 23:11:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:15.254 true 00:08:15.254 23:11:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2259843 00:08:15.254 23:11:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.510 23:11:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.766 23:11:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:15.766 23:11:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:16.022 true 00:08:16.022 23:11:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2259843 00:08:16.022 23:11:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.952 23:11:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:17.209 23:11:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:17.209 23:11:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:17.466 true 00:08:17.466 23:11:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2259843 00:08:17.466 23:11:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.723 23:11:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:17.723 23:11:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:17.723 23:11:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 
1017 00:08:17.981 true 00:08:18.239 23:11:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2259843 00:08:18.239 23:11:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.172 23:11:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:19.172 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:19.172 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:19.172 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:19.172 23:11:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:19.172 23:11:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:19.428 true 00:08:19.428 23:11:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2259843 00:08:19.428 23:11:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.685 23:11:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:19.942 23:11:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:19.942 23:11:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:20.199 true 00:08:20.199 23:11:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2259843 00:08:20.199 23:11:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.130 23:11:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:21.130 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:21.388 23:11:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:21.388 23:11:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:21.644 true 00:08:21.644 23:11:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2259843 00:08:21.645 23:11:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.902 23:11:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:22.160 23:11:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 
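The repetitive remove_ns / add_ns / null_size / bdev_null_resize traces above are the hotplug stress loop: while the spdk_nvme_perf instance started earlier (PERF_PID) keeps issuing reads against the subsystem, namespace 1 is repeatedly detached and re-attached and the NULL1 bdev is grown one step per iteration. The -Q 1000 option on the perf command line appears to be what rate-limits the resulting read errors into the "Message suppressed 999 times" lines. A condensed reading of the loop, using only commands visible in the trace:

  rpc=scripts/rpc.py
  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      null_size=$((null_size + 1))
      $rpc bdev_null_resize NULL1 "$null_size"
  done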
00:08:22.160 23:11:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:22.417 true 00:08:22.417 23:11:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2259843 00:08:22.417 23:11:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.350 23:11:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:23.350 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.350 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.350 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.607 23:11:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:23.607 23:11:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:23.864 true 00:08:23.864 23:11:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2259843 00:08:23.864 23:11:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.122 23:11:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:24.379 23:11:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:24.379 23:11:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:24.379 true 00:08:24.636 23:11:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2259843 00:08:24.636 23:11:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.567 23:11:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:25.567 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:25.567 23:11:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:25.567 23:11:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:25.824 true 00:08:25.824 23:11:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2259843 00:08:25.824 23:11:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.081 23:11:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:26.351 23:11:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:26.351 23:11:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:26.608 true 00:08:26.609 23:11:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2259843 00:08:26.609 23:11:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:27.540 23:11:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:27.797 23:11:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:27.797 23:11:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:28.055 true 00:08:28.055 23:11:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2259843 00:08:28.055 23:11:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:28.312 23:11:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:28.570 23:11:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:28.570 23:11:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:28.828 true 00:08:28.828 23:11:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2259843 00:08:28.828 23:11:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.086 23:11:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:29.343 23:11:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:29.343 23:11:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:29.602 Initializing NVMe Controllers 00:08:29.602 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:29.602 Controller IO queue size 128, less than required. 00:08:29.602 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:29.602 Controller IO queue size 128, less than required. 00:08:29.602 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:29.602 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:08:29.602 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:08:29.602 Initialization complete. Launching workers.
00:08:29.602 ========================================================
00:08:29.602 Latency(us)
00:08:29.602 Device Information : IOPS MiB/s Average min max
00:08:29.602 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1111.52 0.54 61490.09 2252.11 1060063.78
00:08:29.602 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11414.96 5.57 11214.20 1572.38 447698.00
00:08:29.602 ========================================================
00:08:29.602 Total : 12526.48 6.12 15675.37 1572.38 1060063.78
00:08:29.602
00:08:29.602 true 00:08:29.602 23:11:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2259843 00:08:29.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2259843) - No such process 00:08:29.602 23:11:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2259843 00:08:29.602 23:11:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.860 23:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:30.118 23:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:30.118 23:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:30.118 23:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:30.118 23:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:30.118 23:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:30.376 null0 00:08:30.376 23:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:30.376 23:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:30.376 23:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:30.634 null1 00:08:30.634 23:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:30.634 23:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:30.634 23:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:30.890 null2 00:08:30.890 23:11:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:30.890 23:11:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:30.890 23:11:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:31.148 null3 00:08:31.148 23:11:46
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:31.148 23:11:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:31.148 23:11:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:31.404 null4 00:08:31.404 23:11:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:31.404 23:11:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:31.404 23:11:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:31.661 null5 00:08:31.661 23:11:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:31.661 23:11:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:31.661 23:11:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:31.950 null6 00:08:31.950 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:31.950 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:31.950 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:31.950 null7 00:08:32.232 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:32.232 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:32.232 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:32.232 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:32.232 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:32.232 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:32.232 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:32.232 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:32.232 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:32.232 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:32.232 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.232 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:32.232 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
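Once the I/O job has exited (the "No such process" and wait entries above), the test moves to its multi-threaded phase, traced at ns_hotplug_stress.sh lines 58-66: eight null bdevs are created, then eight add_remove workers are launched in the background with one namespace ID each. A sketch under the assumption that the traced commands sit in two plain for-loops; the add_remove helper itself is sketched a little further below:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nthreads=8                                        # sh@58
    pids=()                                           # sh@58
    for ((i = 0; i < nthreads; i++)); do              # sh@59
        $rpc_py bdev_null_create "null$i" 100 4096    # sh@60: null bdev, size 100 (MiB), 4096-byte blocks
    done
    for ((i = 0; i < nthreads; i++)); do              # sh@62
        add_remove "$((i + 1))" "null$i" &            # sh@63: NSID i+1 paired with bdev null$i
        pids+=($!)                                    # sh@64: collect worker pids
    done
    wait "${pids[@]}"                                 # sh@66: the "wait 2263773 2263774 ..." entry below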
00:08:32.232 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:32.232 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:32.232 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:32.232 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:32.232 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:32.232 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
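Each of those workers runs the add_remove helper traced at ns_hotplug_stress.sh lines 14-18: ten back-to-back cycles of nvmf_subsystem_add_ns followed by nvmf_subsystem_remove_ns on the worker's own namespace ID, which is what produces the interleaved add/remove storm in the rest of this log. A hedged reconstruction, with the function wrapper and loop syntax inferred from the traced "local nsid=..." and "(( i < 10 ))" entries:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    add_remove() {
        local nsid=$1 bdev=$2                         # sh@14: e.g. nsid=1 bdev=null0 in the trace
        for ((i = 0; i < 10; i++)); do                # sh@16: ten add/remove cycles per worker
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # sh@17
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # sh@18
        done
    }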
00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2263773 2263774 2263776 2263778 2263780 2263782 2263784 2263786 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:32.233 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:32.547 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.547 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.547 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:32.547 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.547 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.547 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:08:32.547 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.547 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.547 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:32.547 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.547 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.547 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.547 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.547 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:32.547 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:32.547 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.547 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.547 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:32.547 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.547 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.547 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:32.547 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.547 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.547 23:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:32.804 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:32.804 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:32.804 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:32.804 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:32.804 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:32.804 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:32.804 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:32.804 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:33.061 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.061 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.061 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:33.061 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.061 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.061 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:33.061 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.061 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.061 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:33.061 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.061 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.061 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:33.061 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.061 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.061 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.061 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:33.061 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.061 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:33.061 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.061 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.061 23:11:48 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:33.061 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.061 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.061 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:33.318 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:33.318 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:33.318 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:33.318 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:33.318 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:33.318 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:33.575 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:33.575 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:33.575 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.575 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.575 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:33.575 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.575 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.575 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:33.575 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.575 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.833 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:33.833 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.833 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.833 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:33.833 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.833 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.833 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:33.833 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.833 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.833 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:33.833 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.833 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.833 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:33.833 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.833 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.833 23:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:34.091 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:34.091 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.091 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:34.091 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:34.091 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:34.091 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:34.091 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:34.091 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:34.349 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.349 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.349 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:34.349 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.349 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.349 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:34.349 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.349 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.349 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:34.349 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.349 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.349 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:34.349 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.349 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.349 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:34.349 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.349 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.349 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:34.349 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.349 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.349 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:34.349 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.349 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.349 
23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:34.607 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.607 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:34.607 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:34.607 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:34.607 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:34.607 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:34.607 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:34.607 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:34.864 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.864 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.864 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:34.864 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.864 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.864 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:34.864 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.864 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.864 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:34.864 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.864 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.864 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:34.864 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.865 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.865 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:34.865 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.865 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.865 23:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:34.865 23:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.865 23:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.865 23:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:34.865 23:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.865 23:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.865 23:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:35.122 23:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.122 23:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:35.122 23:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:35.122 23:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:35.122 23:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:35.122 23:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:35.122 23:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:35.122 23:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:35.379 23:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:08:35.379 23:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.379 23:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:35.379 23:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.379 23:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.379 23:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:35.379 23:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.379 23:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.379 23:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:35.379 23:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.379 23:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.379 23:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:35.379 23:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.379 23:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.379 23:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:35.379 23:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.379 23:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.379 23:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.379 23:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.380 23:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:35.380 23:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:35.380 23:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.380 23:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.380 23:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:35.637 23:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.637 
23:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:35.637 23:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:35.637 23:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:35.637 23:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:35.637 23:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:35.637 23:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:35.637 23:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:35.895 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.895 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.895 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:35.895 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.895 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.895 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:35.895 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.895 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.895 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:35.895 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.895 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.895 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:35.895 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.895 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.895 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.895 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:35.895 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.895 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:35.895 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.895 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.895 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:35.895 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.895 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.895 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:36.152 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.152 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:36.152 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:36.152 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:36.153 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:36.153 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:36.153 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:36.153 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:36.410 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.410 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.410 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:36.410 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:08:36.410 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.410 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.410 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.410 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:36.410 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:36.410 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.410 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.410 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:36.410 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.410 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.410 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:36.410 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.410 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.410 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:36.410 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.410 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.411 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:36.411 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.411 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.411 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:36.668 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:36.668 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.668 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:36.668 23:11:51 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:36.668 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:36.668 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:36.668 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:36.668 23:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:36.926 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.926 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.926 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:36.926 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.926 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.926 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:36.926 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.926 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.926 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:36.926 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.926 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.926 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:36.926 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.926 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.926 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:36.926 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.926 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.926 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:36.926 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.926 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.926 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:36.926 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.926 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.926 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:37.184 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.184 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:37.184 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:37.184 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:37.184 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:37.184 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:37.184 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:37.184 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:37.443 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.443 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.443 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.443 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.443 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.443 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.443 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.443 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.443 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
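The interleaved nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns RPCs above, each bracketed by the (( ++i )) / (( i < 10 )) guards from ns_hotplug_stress.sh lines 16-18, are the namespace hotplug loop of this test. The way eight different NSIDs appear between consecutive increments suggests one add/remove worker per namespace running in parallel; the bash sketch below reflects that reading and is not the verbatim script (the worker layout and the trailing wait are assumptions, while the RPC arguments are taken from the log):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subsys=nqn.2016-06.io.spdk:cnode1

add_remove() {
    # repeatedly attach and detach one namespace: NSID $1 backed by bdev $2
    local nsid=$1 bdev=$2 i
    for ((i = 0; i < 10; ++i)); do
        "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$subsys" "$bdev"
        "$rpc" nvmf_subsystem_remove_ns "$subsys" "$nsid"
    done
}

# one worker per namespace: NSID 1..8 backed by null0..null7, all in parallel,
# so the host under I/O keeps seeing namespaces appear and disappear
for n in $(seq 1 8); do
    add_remove "$n" "null$((n - 1))" &
done
wait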
00:08:37.443 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.443 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.443 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.443 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.443 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.443 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.443 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.443 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:37.443 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:37.443 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:37.443 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:08:37.443 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:37.443 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:08:37.443 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:37.443 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:37.443 rmmod nvme_tcp 00:08:37.443 rmmod nvme_fabrics 00:08:37.443 rmmod nvme_keyring 00:08:37.701 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:37.701 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:08:37.701 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:08:37.701 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2259414 ']' 00:08:37.701 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2259414 00:08:37.701 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 2259414 ']' 00:08:37.701 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 2259414 00:08:37.701 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:08:37.701 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:37.701 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2259414 00:08:37.701 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:37.701 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:37.701 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2259414' 00:08:37.701 killing process with pid 2259414 00:08:37.701 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 2259414 00:08:37.701 23:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 2259414 00:08:37.960 23:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:37.960 23:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:37.960 23:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:37.960 23:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:37.960 23:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:37.960 23:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.960 23:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:37.960 23:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.858 23:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:39.858 00:08:39.858 real 0m46.183s 00:08:39.858 user 3m29.984s 00:08:39.858 sys 0m16.857s 00:08:39.858 23:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:39.858 23:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:39.858 ************************************ 00:08:39.858 END TEST nvmf_ns_hotplug_stress 00:08:39.858 ************************************ 00:08:39.858 23:11:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:39.858 23:11:55 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:39.858 23:11:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:39.858 23:11:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:39.858 23:11:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:40.116 ************************************ 00:08:40.116 START TEST nvmf_connect_stress 00:08:40.116 ************************************ 00:08:40.116 23:11:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:40.116 * Looking for test storage... 
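The nvmftestfini sequence that closed the hotplug test just above (23:11:52 through 23:11:55) is the standard teardown from test/nvmf/common.sh: unload the host-side NVMe fabrics modules, kill the nvmf_tgt application, then drop the target network namespace and flush the leftover initiator address. A rough bash equivalent, ignoring the retry loop and the set +e / set -e bracketing visible in the trace (the ip netns delete step is an assumption; _remove_spdk_ns runs with its output redirected away, so only its effect is inferred):

# host side: best-effort unload of the NVMe/TCP initiator modules
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
# target side: stop the nvmf_tgt app that served the test (pid 2259414 in this run)
kill "$nvmfpid"
wait "$nvmfpid"
# network cleanup: assumed namespace removal, plus the explicit address flush
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1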
00:08:40.116 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:40.116 23:11:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:40.116 23:11:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:08:40.116 23:11:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:40.116 23:11:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:40.116 23:11:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:40.116 23:11:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:40.116 23:11:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:40.116 23:11:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:40.116 23:11:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:40.116 23:11:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:40.116 23:11:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:40.116 23:11:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:40.116 23:11:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:40.116 23:11:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:40.116 23:11:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:40.116 23:11:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:40.116 23:11:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:40.116 23:11:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:40.116 23:11:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:40.116 23:11:55 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:40.116 23:11:55 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:40.116 23:11:55 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:40.116 23:11:55 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.117 23:11:55 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.117 23:11:55 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.117 23:11:55 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:08:40.117 23:11:55 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.117 23:11:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:08:40.117 23:11:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:40.117 23:11:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:40.117 23:11:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:40.117 23:11:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:40.117 23:11:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:40.117 23:11:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:40.117 23:11:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:40.117 23:11:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:40.117 23:11:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:08:40.117 23:11:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:40.117 23:11:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:40.117 23:11:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:40.117 23:11:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:40.117 23:11:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:40.117 23:11:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.117 23:11:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:40.117 23:11:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.117 23:11:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:40.117 23:11:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:40.117 23:11:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:08:40.117 23:11:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:42.017 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:42.017 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:08:42.017 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:42.017 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:42.017 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:42.017 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:42.017 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:42.017 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:08:42.017 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:42.017 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:08:42.017 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:08:42.017 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:08:42.017 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:08:42.017 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:08:42.017 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:08:42.017 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:42.017 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:42.017 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:42.017 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:42.017 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:42.017 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:42.017 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:42.017 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:42.017 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:42.017 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:42.017 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:42.017 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:42.017 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:42.017 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:08:42.017 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:42.017 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:42.018 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:42.018 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:42.018 Found net devices under 0000:84:00.0: cvl_0_0 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:42.018 23:11:57 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:42.018 Found net devices under 0000:84:00.1: cvl_0_1 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:42.018 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:42.275 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:42.275 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:42.275 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:42.275 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:42.275 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:42.275 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:42.275 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:42.275 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:42.275 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:08:42.275 00:08:42.275 --- 10.0.0.2 ping statistics --- 00:08:42.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.275 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:08:42.276 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:42.276 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:42.276 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:08:42.276 00:08:42.276 --- 10.0.0.1 ping statistics --- 00:08:42.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.276 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:08:42.276 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:42.276 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:08:42.276 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:42.276 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:42.276 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:42.276 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:42.276 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:42.276 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:42.276 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:42.276 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:08:42.276 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:42.276 23:11:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:42.276 23:11:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:42.276 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2266550 00:08:42.276 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:42.276 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2266550 00:08:42.276 23:11:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 2266550 ']' 00:08:42.276 23:11:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.276 23:11:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:42.276 23:11:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.276 23:11:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:42.276 23:11:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:42.276 [2024-07-15 23:11:57.510069] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
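The nvmf_tcp_init trace at 23:11:57 shows how the physical (NET_TYPE=phy) topology is wired for this run: the two ice ports found under /sys/bus/pci/devices/0000:84:00.x/net are exposed as cvl_0_0 and cvl_0_1, the target-side port is moved into a private network namespace, both ends get a 10.0.0.x/24 address, TCP port 4420 is opened, and reachability is verified with a ping in each direction. Condensed into plain commands, using the interface and namespace names exactly as they appear in the log:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

The sub-millisecond round-trip times in the ping output above confirm the two ports are reachable back-to-back before the target is started inside the namespace.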
00:08:42.276 [2024-07-15 23:11:57.510153] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:42.276 EAL: No free 2048 kB hugepages reported on node 1 00:08:42.276 [2024-07-15 23:11:57.573135] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:42.533 [2024-07-15 23:11:57.683392] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:42.533 [2024-07-15 23:11:57.683443] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:42.533 [2024-07-15 23:11:57.683472] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:42.533 [2024-07-15 23:11:57.683483] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:42.533 [2024-07-15 23:11:57.683492] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:42.533 [2024-07-15 23:11:57.683582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:42.533 [2024-07-15 23:11:57.683646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:42.533 [2024-07-15 23:11:57.683649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:42.533 23:11:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:42.533 23:11:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:08:42.533 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:42.533 23:11:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:42.533 23:11:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:42.533 23:11:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:42.533 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:42.533 23:11:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.533 23:11:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:42.533 [2024-07-15 23:11:57.829967] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:42.533 23:11:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.533 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:42.533 23:11:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.533 23:11:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:42.533 23:11:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.533 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:42.533 23:11:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.533 23:11:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:42.790 [2024-07-15 23:11:57.860911] tcp.c: 
981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:42.790 NULL1 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2266584 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
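Between 23:11:57 and 23:11:58 connect_stress.sh brings up its target with four RPCs and then launches the stress client against it. The RPC arguments below are copied from the trace; invoking rpc.py directly instead of the test's rpc_cmd wrapper, and capturing the client PID with $!, are simplifications for this sketch:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$rpc" nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, 8192-byte IO unit
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 10                             # allow any host, up to 10 namespaces
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
"$rpc" bdev_null_create NULL1 1000 512                         # null bdev, size in MB, 512 B blocks

# stress client: single core (-c 0x1), 10-second run against the listener above
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress \
    -c 0x1 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' &
PERF_PID=$!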
00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:42.790 EAL: No free 2048 kB hugepages reported on node 1 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2266584 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.790 23:11:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:43.047 23:11:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.047 23:11:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2266584 00:08:43.047 23:11:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:43.047 23:11:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.047 23:11:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:43.304 23:11:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.304 23:11:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2266584 00:08:43.304 23:11:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:43.304 23:11:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.304 23:11:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:43.868 23:11:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.868 23:11:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2266584 
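The long run of kill -0 2266584 / rpc_cmd records that follows is the supervision loop of connect_stress.sh: the for i in $(seq 1 20) / cat records above presumably append a small batch of RPCs to rpc.txt (only the bare cat shows in the trace), and the script then keeps replaying that batch against the target for as long as the stress client, PID 2266584, is still alive. A hedged sketch of that shape, not the verbatim script (feeding the batch to rpc_cmd on stdin is an assumption):

rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
# keep the target busy with the prepared RPC batch while the
# connect/disconnect stress client is still running
while kill -0 "$PERF_PID"; do
    rpc_cmd < "$rpcs"
done
# the final probe fails with "kill: (2266584) - No such process" (visible further
# down in the log); at that point the client is reaped and the batch file removed
wait "$PERF_PID"
rm -f "$rpcs"

One detail worth noting from the trace: each rpc_cmd pass is bracketed by xtrace_disable / set +x from autotest_common.sh, which is why the individual RPCs inside the batch do not appear in the log.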
00:08:43.868 23:11:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:43.868 23:11:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.868 23:11:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:44.125 23:11:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.125 23:11:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2266584 00:08:44.126 23:11:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:44.126 23:11:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.126 23:11:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:44.383 23:11:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.383 23:11:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2266584 00:08:44.383 23:11:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:44.383 23:11:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.383 23:11:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:44.641 23:11:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.641 23:11:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2266584 00:08:44.641 23:11:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:44.641 23:11:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.641 23:11:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:44.897 23:12:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.897 23:12:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2266584 00:08:44.897 23:12:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:44.897 23:12:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.898 23:12:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:45.461 23:12:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.461 23:12:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2266584 00:08:45.461 23:12:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:45.461 23:12:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.461 23:12:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:45.718 23:12:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.718 23:12:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2266584 00:08:45.718 23:12:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:45.718 23:12:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.718 23:12:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:45.975 23:12:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.975 23:12:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2266584 00:08:45.975 23:12:01 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:08:45.975 23:12:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.975 23:12:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:46.231 23:12:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.231 23:12:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2266584 00:08:46.231 23:12:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:46.231 23:12:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.231 23:12:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:46.488 23:12:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.488 23:12:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2266584 00:08:46.488 23:12:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:46.488 23:12:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.488 23:12:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:47.050 23:12:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.050 23:12:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2266584 00:08:47.050 23:12:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:47.050 23:12:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.050 23:12:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:47.306 23:12:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.306 23:12:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2266584 00:08:47.306 23:12:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:47.306 23:12:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.306 23:12:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:47.563 23:12:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.563 23:12:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2266584 00:08:47.563 23:12:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:47.563 23:12:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.563 23:12:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:47.820 23:12:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.820 23:12:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2266584 00:08:47.820 23:12:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:47.820 23:12:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.820 23:12:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:48.077 23:12:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.077 23:12:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2266584 00:08:48.077 23:12:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:48.077 
23:12:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.077 23:12:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:48.639 23:12:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.639 23:12:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2266584 00:08:48.639 23:12:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:48.639 23:12:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.639 23:12:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:48.896 23:12:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.896 23:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2266584 00:08:48.896 23:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:48.896 23:12:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.896 23:12:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:49.152 23:12:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.152 23:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2266584 00:08:49.152 23:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:49.152 23:12:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.152 23:12:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:49.408 23:12:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.408 23:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2266584 00:08:49.408 23:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:49.408 23:12:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.408 23:12:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:49.971 23:12:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.971 23:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2266584 00:08:49.971 23:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:49.971 23:12:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.971 23:12:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:50.228 23:12:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.228 23:12:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2266584 00:08:50.228 23:12:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:50.228 23:12:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.228 23:12:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:50.484 23:12:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.484 23:12:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2266584 00:08:50.484 23:12:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:50.484 23:12:05 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.484 23:12:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:50.742 23:12:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.742 23:12:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2266584 00:08:50.742 23:12:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:50.742 23:12:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.742 23:12:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:50.999 23:12:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.999 23:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2266584 00:08:50.999 23:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:50.999 23:12:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.999 23:12:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:51.564 23:12:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.564 23:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2266584 00:08:51.564 23:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:51.564 23:12:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.564 23:12:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:51.821 23:12:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.821 23:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2266584 00:08:51.821 23:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:51.821 23:12:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.821 23:12:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:52.078 23:12:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.078 23:12:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2266584 00:08:52.078 23:12:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:52.078 23:12:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.078 23:12:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:52.334 23:12:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.334 23:12:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2266584 00:08:52.334 23:12:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:52.334 23:12:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.334 23:12:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:52.593 23:12:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.593 23:12:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2266584 00:08:52.593 23:12:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:52.593 23:12:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 
00:08:52.593 23:12:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:52.882 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:53.147 23:12:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.147 23:12:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2266584 00:08:53.147 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2266584) - No such process 00:08:53.147 23:12:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2266584 00:08:53.147 23:12:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:53.147 23:12:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:53.147 23:12:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:08:53.147 23:12:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:53.147 23:12:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:08:53.147 23:12:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:53.147 23:12:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:08:53.147 23:12:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:53.147 23:12:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:53.147 rmmod nvme_tcp 00:08:53.147 rmmod nvme_fabrics 00:08:53.147 rmmod nvme_keyring 00:08:53.147 23:12:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:53.148 23:12:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:08:53.148 23:12:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:08:53.148 23:12:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2266550 ']' 00:08:53.148 23:12:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2266550 00:08:53.148 23:12:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 2266550 ']' 00:08:53.148 23:12:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 2266550 00:08:53.148 23:12:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:08:53.148 23:12:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:53.148 23:12:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2266550 00:08:53.148 23:12:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:53.148 23:12:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:53.148 23:12:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2266550' 00:08:53.148 killing process with pid 2266550 00:08:53.148 23:12:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 2266550 00:08:53.148 23:12:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 2266550 00:08:53.405 23:12:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:53.405 23:12:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:53.405 23:12:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:08:53.405 23:12:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:53.405 23:12:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:53.405 23:12:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.405 23:12:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:53.405 23:12:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.304 23:12:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:55.304 00:08:55.304 real 0m15.422s 00:08:55.304 user 0m38.136s 00:08:55.304 sys 0m6.403s 00:08:55.304 23:12:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:55.304 23:12:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:55.304 ************************************ 00:08:55.304 END TEST nvmf_connect_stress 00:08:55.304 ************************************ 00:08:55.563 23:12:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:55.563 23:12:10 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:55.563 23:12:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:55.563 23:12:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:55.563 23:12:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:55.563 ************************************ 00:08:55.563 START TEST nvmf_fused_ordering 00:08:55.563 ************************************ 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:55.563 * Looking for test storage... 
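The connect_stress teardown that just finished above (nvmftestfini) is the same cleanup pattern every nvmf target test in this log follows: sync, unload the initiator-side NVMe/TCP kernel modules, kill the nvmf_tgt process by PID, tear down the SPDK network namespace, and flush the leftover address on the initiator interface. Below is a condensed, illustrative bash sketch of that sequence; it only restates commands visible in the log above (PID 2266550, interface cvl_0_1, namespace cvl_0_0_ns_spdk come from this run), the standalone function framing is an assumption rather than the harness's actual code, and the namespace deletion line is an assumed expansion of remove_spdk_ns, whose body is not traced here.

```bash
# Illustrative teardown sketch based on the commands logged above; not the harness's real code.
nvmftestfini_sketch() {
    local nvmfpid=$1                      # e.g. 2266550 for the connect_stress target

    sync
    modprobe -v -r nvme-tcp               # also drops nvme_fabrics / nvme_keyring per the rmmod output
    modprobe -v -r nvme-fabrics

    # killprocess(): only kill the PID if it still names a reactor process (not sudo)
    if [ "$(ps --no-headers -o comm= "$nvmfpid")" != sudo ]; then
        echo "killing process with pid $nvmfpid"
        kill "$nvmfpid" && wait "$nvmfpid"
    fi

    # nvmf_tcp_fini / remove_spdk_ns: drop the target namespace (assumed expansion) and stale addresses
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
    ip -4 addr flush cvl_0_1
}
```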
00:08:55.563 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:08:55.563 23:12:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:58.093 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:58.093 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:08:58.093 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:58.093 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:58.093 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:58.093 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:58.093 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:58.093 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:08:58.093 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:58.093 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:08:58.093 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:08:58.093 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:08:58.093 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:08:58.093 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:08:58.093 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:08:58.093 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:58.093 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:58.093 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:58.093 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:58.093 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:58.093 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:58.093 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:58.093 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:58.094 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:58.094 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:58.094 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:58.094 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:58.094 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:58.094 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:08:58.094 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:58.094 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:58.094 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:58.094 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:58.094 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:58.094 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:58.094 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:58.094 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:58.094 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:58.094 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:58.094 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:58.094 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:58.094 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:58.094 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:58.094 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:58.094 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:58.094 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:58.094 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:58.094 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:58.094 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:58.094 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:58.094 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:58.094 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:58.095 Found net devices under 0000:84:00.0: cvl_0_0 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:58.095 23:12:12 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:58.095 Found net devices under 0000:84:00.1: cvl_0_1 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:58.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:58.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:08:58.095 00:08:58.095 --- 10.0.0.2 ping statistics --- 00:08:58.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.095 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:58.095 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:58.095 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:08:58.095 00:08:58.095 --- 10.0.0.1 ping statistics --- 00:08:58.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.095 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2270482 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2270482 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 2270482 ']' 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:58.095 23:12:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:58.095 [2024-07-15 23:12:13.009372] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
00:08:58.095 [2024-07-15 23:12:13.009465] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:58.095 EAL: No free 2048 kB hugepages reported on node 1 00:08:58.095 [2024-07-15 23:12:13.072388] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.095 [2024-07-15 23:12:13.179798] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:58.095 [2024-07-15 23:12:13.179860] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:58.095 [2024-07-15 23:12:13.179889] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:58.095 [2024-07-15 23:12:13.179900] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:58.095 [2024-07-15 23:12:13.179911] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:58.095 [2024-07-15 23:12:13.179937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:58.095 23:12:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:58.095 23:12:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:08:58.095 23:12:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:58.095 23:12:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:58.095 23:12:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:58.095 23:12:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:58.095 23:12:13 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:58.095 23:12:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.095 23:12:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:58.095 [2024-07-15 23:12:13.330539] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:58.095 23:12:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.095 23:12:13 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:58.095 23:12:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.095 23:12:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:58.095 23:12:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.095 23:12:13 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:58.095 23:12:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.095 23:12:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:58.095 [2024-07-15 23:12:13.346730] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:58.095 23:12:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.095 23:12:13 
nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:58.095 23:12:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.095 23:12:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:58.095 NULL1 00:08:58.095 23:12:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.095 23:12:13 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:08:58.095 23:12:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.095 23:12:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:58.095 23:12:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.095 23:12:13 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:58.095 23:12:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.095 23:12:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:58.096 23:12:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.096 23:12:13 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:58.096 [2024-07-15 23:12:13.392725] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:08:58.096 [2024-07-15 23:12:13.392772] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2270508 ] 00:08:58.353 EAL: No free 2048 kB hugepages reported on node 1 00:08:58.611 Attached to nqn.2016-06.io.spdk:cnode1 00:08:58.611 Namespace ID: 1 size: 1GB 00:08:58.611 fused_ordering(0) 00:08:58.611 fused_ordering(1) 00:08:58.611 fused_ordering(2) 00:08:58.611 fused_ordering(3) 00:08:58.611 fused_ordering(4) 00:08:58.611 fused_ordering(5) 00:08:58.611 fused_ordering(6) 00:08:58.611 fused_ordering(7) 00:08:58.611 fused_ordering(8) 00:08:58.611 fused_ordering(9) 00:08:58.611 fused_ordering(10) 00:08:58.611 fused_ordering(11) 00:08:58.611 fused_ordering(12) 00:08:58.611 fused_ordering(13) 00:08:58.611 fused_ordering(14) 00:08:58.611 fused_ordering(15) 00:08:58.611 fused_ordering(16) 00:08:58.611 fused_ordering(17) 00:08:58.611 fused_ordering(18) 00:08:58.611 fused_ordering(19) 00:08:58.611 fused_ordering(20) 00:08:58.611 fused_ordering(21) 00:08:58.611 fused_ordering(22) 00:08:58.611 fused_ordering(23) 00:08:58.611 fused_ordering(24) 00:08:58.611 fused_ordering(25) 00:08:58.611 fused_ordering(26) 00:08:58.611 fused_ordering(27) 00:08:58.611 fused_ordering(28) 00:08:58.611 fused_ordering(29) 00:08:58.611 fused_ordering(30) 00:08:58.611 fused_ordering(31) 00:08:58.611 fused_ordering(32) 00:08:58.611 fused_ordering(33) 00:08:58.611 fused_ordering(34) 00:08:58.611 fused_ordering(35) 00:08:58.611 fused_ordering(36) 00:08:58.611 fused_ordering(37) 00:08:58.611 fused_ordering(38) 00:08:58.611 fused_ordering(39) 00:08:58.611 fused_ordering(40) 00:08:58.611 fused_ordering(41) 00:08:58.611 fused_ordering(42) 00:08:58.611 fused_ordering(43) 00:08:58.611 
fused_ordering(44) 00:08:58.611 fused_ordering(45) 00:08:58.611 fused_ordering(46) 00:08:58.611 fused_ordering(47) 00:08:58.611 fused_ordering(48) 00:08:58.611 fused_ordering(49) 00:08:58.611 fused_ordering(50) 00:08:58.611 fused_ordering(51) 00:08:58.611 fused_ordering(52) 00:08:58.611 fused_ordering(53) 00:08:58.611 fused_ordering(54) 00:08:58.611 fused_ordering(55) 00:08:58.611 fused_ordering(56) 00:08:58.611 fused_ordering(57) 00:08:58.611 fused_ordering(58) 00:08:58.611 fused_ordering(59) 00:08:58.611 fused_ordering(60) 00:08:58.611 fused_ordering(61) 00:08:58.611 fused_ordering(62) 00:08:58.611 fused_ordering(63) 00:08:58.611 fused_ordering(64) 00:08:58.611 fused_ordering(65) 00:08:58.611 fused_ordering(66) 00:08:58.611 fused_ordering(67) 00:08:58.611 fused_ordering(68) 00:08:58.612 fused_ordering(69) 00:08:58.612 fused_ordering(70) 00:08:58.612 fused_ordering(71) 00:08:58.612 fused_ordering(72) 00:08:58.612 fused_ordering(73) 00:08:58.612 fused_ordering(74) 00:08:58.612 fused_ordering(75) 00:08:58.612 fused_ordering(76) 00:08:58.612 fused_ordering(77) 00:08:58.612 fused_ordering(78) 00:08:58.612 fused_ordering(79) 00:08:58.612 fused_ordering(80) 00:08:58.612 fused_ordering(81) 00:08:58.612 fused_ordering(82) 00:08:58.612 fused_ordering(83) 00:08:58.612 fused_ordering(84) 00:08:58.612 fused_ordering(85) 00:08:58.612 fused_ordering(86) 00:08:58.612 fused_ordering(87) 00:08:58.612 fused_ordering(88) 00:08:58.612 fused_ordering(89) 00:08:58.612 fused_ordering(90) 00:08:58.612 fused_ordering(91) 00:08:58.612 fused_ordering(92) 00:08:58.612 fused_ordering(93) 00:08:58.612 fused_ordering(94) 00:08:58.612 fused_ordering(95) 00:08:58.612 fused_ordering(96) 00:08:58.612 fused_ordering(97) 00:08:58.612 fused_ordering(98) 00:08:58.612 fused_ordering(99) 00:08:58.612 fused_ordering(100) 00:08:58.612 fused_ordering(101) 00:08:58.612 fused_ordering(102) 00:08:58.612 fused_ordering(103) 00:08:58.612 fused_ordering(104) 00:08:58.612 fused_ordering(105) 00:08:58.612 fused_ordering(106) 00:08:58.612 fused_ordering(107) 00:08:58.612 fused_ordering(108) 00:08:58.612 fused_ordering(109) 00:08:58.612 fused_ordering(110) 00:08:58.612 fused_ordering(111) 00:08:58.612 fused_ordering(112) 00:08:58.612 fused_ordering(113) 00:08:58.612 fused_ordering(114) 00:08:58.612 fused_ordering(115) 00:08:58.612 fused_ordering(116) 00:08:58.612 fused_ordering(117) 00:08:58.612 fused_ordering(118) 00:08:58.612 fused_ordering(119) 00:08:58.612 fused_ordering(120) 00:08:58.612 fused_ordering(121) 00:08:58.612 fused_ordering(122) 00:08:58.612 fused_ordering(123) 00:08:58.612 fused_ordering(124) 00:08:58.612 fused_ordering(125) 00:08:58.612 fused_ordering(126) 00:08:58.612 fused_ordering(127) 00:08:58.612 fused_ordering(128) 00:08:58.612 fused_ordering(129) 00:08:58.612 fused_ordering(130) 00:08:58.612 fused_ordering(131) 00:08:58.612 fused_ordering(132) 00:08:58.612 fused_ordering(133) 00:08:58.612 fused_ordering(134) 00:08:58.612 fused_ordering(135) 00:08:58.612 fused_ordering(136) 00:08:58.612 fused_ordering(137) 00:08:58.612 fused_ordering(138) 00:08:58.612 fused_ordering(139) 00:08:58.612 fused_ordering(140) 00:08:58.612 fused_ordering(141) 00:08:58.612 fused_ordering(142) 00:08:58.612 fused_ordering(143) 00:08:58.612 fused_ordering(144) 00:08:58.612 fused_ordering(145) 00:08:58.612 fused_ordering(146) 00:08:58.612 fused_ordering(147) 00:08:58.612 fused_ordering(148) 00:08:58.612 fused_ordering(149) 00:08:58.612 fused_ordering(150) 00:08:58.612 fused_ordering(151) 00:08:58.612 fused_ordering(152) 00:08:58.612 
fused_ordering(153) 00:08:58.612 fused_ordering(154) 00:08:58.612 fused_ordering(155) 00:08:58.612 fused_ordering(156) 00:08:58.612 fused_ordering(157) 00:08:58.612 fused_ordering(158) 00:08:58.612 fused_ordering(159) 00:08:58.612 fused_ordering(160) 00:08:58.612 fused_ordering(161) 00:08:58.612 fused_ordering(162) 00:08:58.612 fused_ordering(163) 00:08:58.612 fused_ordering(164) 00:08:58.612 fused_ordering(165) 00:08:58.612 fused_ordering(166) 00:08:58.612 fused_ordering(167) 00:08:58.612 fused_ordering(168) 00:08:58.612 fused_ordering(169) 00:08:58.612 fused_ordering(170) 00:08:58.612 fused_ordering(171) 00:08:58.612 fused_ordering(172) 00:08:58.612 fused_ordering(173) 00:08:58.612 fused_ordering(174) 00:08:58.612 fused_ordering(175) 00:08:58.612 fused_ordering(176) 00:08:58.612 fused_ordering(177) 00:08:58.612 fused_ordering(178) 00:08:58.612 fused_ordering(179) 00:08:58.612 fused_ordering(180) 00:08:58.612 fused_ordering(181) 00:08:58.612 fused_ordering(182) 00:08:58.612 fused_ordering(183) 00:08:58.612 fused_ordering(184) 00:08:58.612 fused_ordering(185) 00:08:58.612 fused_ordering(186) 00:08:58.612 fused_ordering(187) 00:08:58.612 fused_ordering(188) 00:08:58.612 fused_ordering(189) 00:08:58.612 fused_ordering(190) 00:08:58.612 fused_ordering(191) 00:08:58.612 fused_ordering(192) 00:08:58.612 fused_ordering(193) 00:08:58.612 fused_ordering(194) 00:08:58.612 fused_ordering(195) 00:08:58.612 fused_ordering(196) 00:08:58.612 fused_ordering(197) 00:08:58.612 fused_ordering(198) 00:08:58.612 fused_ordering(199) 00:08:58.612 fused_ordering(200) 00:08:58.612 fused_ordering(201) 00:08:58.612 fused_ordering(202) 00:08:58.612 fused_ordering(203) 00:08:58.612 fused_ordering(204) 00:08:58.612 fused_ordering(205) 00:08:59.177 fused_ordering(206) 00:08:59.177 fused_ordering(207) 00:08:59.177 fused_ordering(208) 00:08:59.177 fused_ordering(209) 00:08:59.177 fused_ordering(210) 00:08:59.177 fused_ordering(211) 00:08:59.177 fused_ordering(212) 00:08:59.177 fused_ordering(213) 00:08:59.177 fused_ordering(214) 00:08:59.177 fused_ordering(215) 00:08:59.177 fused_ordering(216) 00:08:59.177 fused_ordering(217) 00:08:59.177 fused_ordering(218) 00:08:59.177 fused_ordering(219) 00:08:59.177 fused_ordering(220) 00:08:59.177 fused_ordering(221) 00:08:59.177 fused_ordering(222) 00:08:59.177 fused_ordering(223) 00:08:59.177 fused_ordering(224) 00:08:59.177 fused_ordering(225) 00:08:59.177 fused_ordering(226) 00:08:59.177 fused_ordering(227) 00:08:59.177 fused_ordering(228) 00:08:59.177 fused_ordering(229) 00:08:59.177 fused_ordering(230) 00:08:59.177 fused_ordering(231) 00:08:59.177 fused_ordering(232) 00:08:59.177 fused_ordering(233) 00:08:59.177 fused_ordering(234) 00:08:59.177 fused_ordering(235) 00:08:59.177 fused_ordering(236) 00:08:59.177 fused_ordering(237) 00:08:59.177 fused_ordering(238) 00:08:59.177 fused_ordering(239) 00:08:59.177 fused_ordering(240) 00:08:59.177 fused_ordering(241) 00:08:59.177 fused_ordering(242) 00:08:59.177 fused_ordering(243) 00:08:59.177 fused_ordering(244) 00:08:59.177 fused_ordering(245) 00:08:59.177 fused_ordering(246) 00:08:59.177 fused_ordering(247) 00:08:59.177 fused_ordering(248) 00:08:59.177 fused_ordering(249) 00:08:59.177 fused_ordering(250) 00:08:59.177 fused_ordering(251) 00:08:59.177 fused_ordering(252) 00:08:59.177 fused_ordering(253) 00:08:59.177 fused_ordering(254) 00:08:59.177 fused_ordering(255) 00:08:59.177 fused_ordering(256) 00:08:59.177 fused_ordering(257) 00:08:59.177 fused_ordering(258) 00:08:59.177 fused_ordering(259) 00:08:59.177 fused_ordering(260) 
00:08:59.177 fused_ordering(261) 00:08:59.177 fused_ordering(262) 00:08:59.177 fused_ordering(263) 00:08:59.177 fused_ordering(264) 00:08:59.177 fused_ordering(265) 00:08:59.177 fused_ordering(266) 00:08:59.177 fused_ordering(267) 00:08:59.177 fused_ordering(268) 00:08:59.177 fused_ordering(269) 00:08:59.177 fused_ordering(270) 00:08:59.177 fused_ordering(271) 00:08:59.177 fused_ordering(272) 00:08:59.177 fused_ordering(273) 00:08:59.177 fused_ordering(274) 00:08:59.177 fused_ordering(275) 00:08:59.177 fused_ordering(276) 00:08:59.177 fused_ordering(277) 00:08:59.177 fused_ordering(278) 00:08:59.177 fused_ordering(279) 00:08:59.177 fused_ordering(280) 00:08:59.177 fused_ordering(281) 00:08:59.177 fused_ordering(282) 00:08:59.177 fused_ordering(283) 00:08:59.177 fused_ordering(284) 00:08:59.177 fused_ordering(285) 00:08:59.177 fused_ordering(286) 00:08:59.177 fused_ordering(287) 00:08:59.177 fused_ordering(288) 00:08:59.177 fused_ordering(289) 00:08:59.177 fused_ordering(290) 00:08:59.177 fused_ordering(291) 00:08:59.177 fused_ordering(292) 00:08:59.177 fused_ordering(293) 00:08:59.177 fused_ordering(294) 00:08:59.177 fused_ordering(295) 00:08:59.177 fused_ordering(296) 00:08:59.177 fused_ordering(297) 00:08:59.177 fused_ordering(298) 00:08:59.177 fused_ordering(299) 00:08:59.177 fused_ordering(300) 00:08:59.177 fused_ordering(301) 00:08:59.177 fused_ordering(302) 00:08:59.177 fused_ordering(303) 00:08:59.177 fused_ordering(304) 00:08:59.177 fused_ordering(305) 00:08:59.177 fused_ordering(306) 00:08:59.177 fused_ordering(307) 00:08:59.177 fused_ordering(308) 00:08:59.177 fused_ordering(309) 00:08:59.177 fused_ordering(310) 00:08:59.177 fused_ordering(311) 00:08:59.177 fused_ordering(312) 00:08:59.177 fused_ordering(313) 00:08:59.177 fused_ordering(314) 00:08:59.177 fused_ordering(315) 00:08:59.177 fused_ordering(316) 00:08:59.177 fused_ordering(317) 00:08:59.177 fused_ordering(318) 00:08:59.177 fused_ordering(319) 00:08:59.177 fused_ordering(320) 00:08:59.177 fused_ordering(321) 00:08:59.177 fused_ordering(322) 00:08:59.177 fused_ordering(323) 00:08:59.177 fused_ordering(324) 00:08:59.177 fused_ordering(325) 00:08:59.177 fused_ordering(326) 00:08:59.177 fused_ordering(327) 00:08:59.177 fused_ordering(328) 00:08:59.177 fused_ordering(329) 00:08:59.177 fused_ordering(330) 00:08:59.178 fused_ordering(331) 00:08:59.178 fused_ordering(332) 00:08:59.178 fused_ordering(333) 00:08:59.178 fused_ordering(334) 00:08:59.178 fused_ordering(335) 00:08:59.178 fused_ordering(336) 00:08:59.178 fused_ordering(337) 00:08:59.178 fused_ordering(338) 00:08:59.178 fused_ordering(339) 00:08:59.178 fused_ordering(340) 00:08:59.178 fused_ordering(341) 00:08:59.178 fused_ordering(342) 00:08:59.178 fused_ordering(343) 00:08:59.178 fused_ordering(344) 00:08:59.178 fused_ordering(345) 00:08:59.178 fused_ordering(346) 00:08:59.178 fused_ordering(347) 00:08:59.178 fused_ordering(348) 00:08:59.178 fused_ordering(349) 00:08:59.178 fused_ordering(350) 00:08:59.178 fused_ordering(351) 00:08:59.178 fused_ordering(352) 00:08:59.178 fused_ordering(353) 00:08:59.178 fused_ordering(354) 00:08:59.178 fused_ordering(355) 00:08:59.178 fused_ordering(356) 00:08:59.178 fused_ordering(357) 00:08:59.178 fused_ordering(358) 00:08:59.178 fused_ordering(359) 00:08:59.178 fused_ordering(360) 00:08:59.178 fused_ordering(361) 00:08:59.178 fused_ordering(362) 00:08:59.178 fused_ordering(363) 00:08:59.178 fused_ordering(364) 00:08:59.178 fused_ordering(365) 00:08:59.178 fused_ordering(366) 00:08:59.178 fused_ordering(367) 00:08:59.178 
fused_ordering(368) 00:08:59.178 fused_ordering(369) 00:08:59.178 fused_ordering(370) 00:08:59.178 fused_ordering(371) 00:08:59.178 fused_ordering(372) 00:08:59.178 fused_ordering(373) 00:08:59.178 fused_ordering(374) 00:08:59.178 fused_ordering(375) 00:08:59.178 fused_ordering(376) 00:08:59.178 fused_ordering(377) 00:08:59.178 fused_ordering(378) 00:08:59.178 fused_ordering(379) 00:08:59.178 fused_ordering(380) 00:08:59.178 fused_ordering(381) 00:08:59.178 fused_ordering(382) 00:08:59.178 fused_ordering(383) 00:08:59.178 fused_ordering(384) 00:08:59.178 fused_ordering(385) 00:08:59.178 fused_ordering(386) 00:08:59.178 fused_ordering(387) 00:08:59.178 fused_ordering(388) 00:08:59.178 fused_ordering(389) 00:08:59.178 fused_ordering(390) 00:08:59.178 fused_ordering(391) 00:08:59.178 fused_ordering(392) 00:08:59.178 fused_ordering(393) 00:08:59.178 fused_ordering(394) 00:08:59.178 fused_ordering(395) 00:08:59.178 fused_ordering(396) 00:08:59.178 fused_ordering(397) 00:08:59.178 fused_ordering(398) 00:08:59.178 fused_ordering(399) 00:08:59.178 fused_ordering(400) 00:08:59.178 fused_ordering(401) 00:08:59.178 fused_ordering(402) 00:08:59.178 fused_ordering(403) 00:08:59.178 fused_ordering(404) 00:08:59.178 fused_ordering(405) 00:08:59.178 fused_ordering(406) 00:08:59.178 fused_ordering(407) 00:08:59.178 fused_ordering(408) 00:08:59.178 fused_ordering(409) 00:08:59.178 fused_ordering(410) 00:08:59.743 fused_ordering(411) 00:08:59.743 fused_ordering(412) 00:08:59.743 fused_ordering(413) 00:08:59.743 fused_ordering(414) 00:08:59.743 fused_ordering(415) 00:08:59.743 fused_ordering(416) 00:08:59.743 fused_ordering(417) 00:08:59.743 fused_ordering(418) 00:08:59.743 fused_ordering(419) 00:08:59.743 fused_ordering(420) 00:08:59.743 fused_ordering(421) 00:08:59.743 fused_ordering(422) 00:08:59.743 fused_ordering(423) 00:08:59.743 fused_ordering(424) 00:08:59.743 fused_ordering(425) 00:08:59.743 fused_ordering(426) 00:08:59.743 fused_ordering(427) 00:08:59.743 fused_ordering(428) 00:08:59.743 fused_ordering(429) 00:08:59.743 fused_ordering(430) 00:08:59.743 fused_ordering(431) 00:08:59.743 fused_ordering(432) 00:08:59.743 fused_ordering(433) 00:08:59.743 fused_ordering(434) 00:08:59.743 fused_ordering(435) 00:08:59.743 fused_ordering(436) 00:08:59.743 fused_ordering(437) 00:08:59.743 fused_ordering(438) 00:08:59.743 fused_ordering(439) 00:08:59.743 fused_ordering(440) 00:08:59.743 fused_ordering(441) 00:08:59.743 fused_ordering(442) 00:08:59.743 fused_ordering(443) 00:08:59.743 fused_ordering(444) 00:08:59.743 fused_ordering(445) 00:08:59.743 fused_ordering(446) 00:08:59.743 fused_ordering(447) 00:08:59.743 fused_ordering(448) 00:08:59.743 fused_ordering(449) 00:08:59.743 fused_ordering(450) 00:08:59.743 fused_ordering(451) 00:08:59.743 fused_ordering(452) 00:08:59.743 fused_ordering(453) 00:08:59.743 fused_ordering(454) 00:08:59.743 fused_ordering(455) 00:08:59.743 fused_ordering(456) 00:08:59.743 fused_ordering(457) 00:08:59.743 fused_ordering(458) 00:08:59.743 fused_ordering(459) 00:08:59.743 fused_ordering(460) 00:08:59.743 fused_ordering(461) 00:08:59.743 fused_ordering(462) 00:08:59.743 fused_ordering(463) 00:08:59.743 fused_ordering(464) 00:08:59.743 fused_ordering(465) 00:08:59.743 fused_ordering(466) 00:08:59.743 fused_ordering(467) 00:08:59.743 fused_ordering(468) 00:08:59.743 fused_ordering(469) 00:08:59.743 fused_ordering(470) 00:08:59.743 fused_ordering(471) 00:08:59.743 fused_ordering(472) 00:08:59.743 fused_ordering(473) 00:08:59.743 fused_ordering(474) 00:08:59.743 fused_ordering(475) 
00:08:59.743 fused_ordering(476) 00:08:59.743 fused_ordering(477) 00:08:59.743 fused_ordering(478) 00:08:59.743 fused_ordering(479) 00:08:59.743 fused_ordering(480) 00:08:59.743 fused_ordering(481) 00:08:59.743 fused_ordering(482) 00:08:59.743 fused_ordering(483) 00:08:59.743 fused_ordering(484) 00:08:59.743 fused_ordering(485) 00:08:59.743 fused_ordering(486) 00:08:59.743 fused_ordering(487) 00:08:59.743 fused_ordering(488) 00:08:59.743 fused_ordering(489) 00:08:59.743 fused_ordering(490) 00:08:59.743 fused_ordering(491) 00:08:59.743 fused_ordering(492) 00:08:59.744 fused_ordering(493) 00:08:59.744 fused_ordering(494) 00:08:59.744 fused_ordering(495) 00:08:59.744 fused_ordering(496) 00:08:59.744 fused_ordering(497) 00:08:59.744 fused_ordering(498) 00:08:59.744 fused_ordering(499) 00:08:59.744 fused_ordering(500) 00:08:59.744 fused_ordering(501) 00:08:59.744 fused_ordering(502) 00:08:59.744 fused_ordering(503) 00:08:59.744 fused_ordering(504) 00:08:59.744 fused_ordering(505) 00:08:59.744 fused_ordering(506) 00:08:59.744 fused_ordering(507) 00:08:59.744 fused_ordering(508) 00:08:59.744 fused_ordering(509) 00:08:59.744 fused_ordering(510) 00:08:59.744 fused_ordering(511) 00:08:59.744 fused_ordering(512) 00:08:59.744 fused_ordering(513) 00:08:59.744 fused_ordering(514) 00:08:59.744 fused_ordering(515) 00:08:59.744 fused_ordering(516) 00:08:59.744 fused_ordering(517) 00:08:59.744 fused_ordering(518) 00:08:59.744 fused_ordering(519) 00:08:59.744 fused_ordering(520) 00:08:59.744 fused_ordering(521) 00:08:59.744 fused_ordering(522) 00:08:59.744 fused_ordering(523) 00:08:59.744 fused_ordering(524) 00:08:59.744 fused_ordering(525) 00:08:59.744 fused_ordering(526) 00:08:59.744 fused_ordering(527) 00:08:59.744 fused_ordering(528) 00:08:59.744 fused_ordering(529) 00:08:59.744 fused_ordering(530) 00:08:59.744 fused_ordering(531) 00:08:59.744 fused_ordering(532) 00:08:59.744 fused_ordering(533) 00:08:59.744 fused_ordering(534) 00:08:59.744 fused_ordering(535) 00:08:59.744 fused_ordering(536) 00:08:59.744 fused_ordering(537) 00:08:59.744 fused_ordering(538) 00:08:59.744 fused_ordering(539) 00:08:59.744 fused_ordering(540) 00:08:59.744 fused_ordering(541) 00:08:59.744 fused_ordering(542) 00:08:59.744 fused_ordering(543) 00:08:59.744 fused_ordering(544) 00:08:59.744 fused_ordering(545) 00:08:59.744 fused_ordering(546) 00:08:59.744 fused_ordering(547) 00:08:59.744 fused_ordering(548) 00:08:59.744 fused_ordering(549) 00:08:59.744 fused_ordering(550) 00:08:59.744 fused_ordering(551) 00:08:59.744 fused_ordering(552) 00:08:59.744 fused_ordering(553) 00:08:59.744 fused_ordering(554) 00:08:59.744 fused_ordering(555) 00:08:59.744 fused_ordering(556) 00:08:59.744 fused_ordering(557) 00:08:59.744 fused_ordering(558) 00:08:59.744 fused_ordering(559) 00:08:59.744 fused_ordering(560) 00:08:59.744 fused_ordering(561) 00:08:59.744 fused_ordering(562) 00:08:59.744 fused_ordering(563) 00:08:59.744 fused_ordering(564) 00:08:59.744 fused_ordering(565) 00:08:59.744 fused_ordering(566) 00:08:59.744 fused_ordering(567) 00:08:59.744 fused_ordering(568) 00:08:59.744 fused_ordering(569) 00:08:59.744 fused_ordering(570) 00:08:59.744 fused_ordering(571) 00:08:59.744 fused_ordering(572) 00:08:59.744 fused_ordering(573) 00:08:59.744 fused_ordering(574) 00:08:59.744 fused_ordering(575) 00:08:59.744 fused_ordering(576) 00:08:59.744 fused_ordering(577) 00:08:59.744 fused_ordering(578) 00:08:59.744 fused_ordering(579) 00:08:59.744 fused_ordering(580) 00:08:59.744 fused_ordering(581) 00:08:59.744 fused_ordering(582) 00:08:59.744 
fused_ordering(583) 00:08:59.744 fused_ordering(584) 00:08:59.744 fused_ordering(585) 00:08:59.744 fused_ordering(586) 00:08:59.744 fused_ordering(587) 00:08:59.744 fused_ordering(588) 00:08:59.744 fused_ordering(589) 00:08:59.744 fused_ordering(590) 00:08:59.744 fused_ordering(591) 00:08:59.744 fused_ordering(592) 00:08:59.744 fused_ordering(593) 00:08:59.744 fused_ordering(594) 00:08:59.744 fused_ordering(595) 00:08:59.744 fused_ordering(596) 00:08:59.744 fused_ordering(597) 00:08:59.744 fused_ordering(598) 00:08:59.744 fused_ordering(599) 00:08:59.744 fused_ordering(600) 00:08:59.744 fused_ordering(601) 00:08:59.744 fused_ordering(602) 00:08:59.744 fused_ordering(603) 00:08:59.744 fused_ordering(604) 00:08:59.744 fused_ordering(605) 00:08:59.744 fused_ordering(606) 00:08:59.744 fused_ordering(607) 00:08:59.744 fused_ordering(608) 00:08:59.744 fused_ordering(609) 00:08:59.744 fused_ordering(610) 00:08:59.744 fused_ordering(611) 00:08:59.744 fused_ordering(612) 00:08:59.744 fused_ordering(613) 00:08:59.744 fused_ordering(614) 00:08:59.744 fused_ordering(615) 00:09:00.310 fused_ordering(616) 00:09:00.310 fused_ordering(617) 00:09:00.310 fused_ordering(618) 00:09:00.310 fused_ordering(619) 00:09:00.310 fused_ordering(620) 00:09:00.310 fused_ordering(621) 00:09:00.310 fused_ordering(622) 00:09:00.310 fused_ordering(623) 00:09:00.310 fused_ordering(624) 00:09:00.310 fused_ordering(625) 00:09:00.310 fused_ordering(626) 00:09:00.310 fused_ordering(627) 00:09:00.310 fused_ordering(628) 00:09:00.310 fused_ordering(629) 00:09:00.310 fused_ordering(630) 00:09:00.310 fused_ordering(631) 00:09:00.310 fused_ordering(632) 00:09:00.310 fused_ordering(633) 00:09:00.310 fused_ordering(634) 00:09:00.310 fused_ordering(635) 00:09:00.310 fused_ordering(636) 00:09:00.310 fused_ordering(637) 00:09:00.310 fused_ordering(638) 00:09:00.310 fused_ordering(639) 00:09:00.310 fused_ordering(640) 00:09:00.310 fused_ordering(641) 00:09:00.310 fused_ordering(642) 00:09:00.310 fused_ordering(643) 00:09:00.310 fused_ordering(644) 00:09:00.310 fused_ordering(645) 00:09:00.310 fused_ordering(646) 00:09:00.310 fused_ordering(647) 00:09:00.310 fused_ordering(648) 00:09:00.310 fused_ordering(649) 00:09:00.310 fused_ordering(650) 00:09:00.310 fused_ordering(651) 00:09:00.310 fused_ordering(652) 00:09:00.310 fused_ordering(653) 00:09:00.310 fused_ordering(654) 00:09:00.310 fused_ordering(655) 00:09:00.310 fused_ordering(656) 00:09:00.310 fused_ordering(657) 00:09:00.310 fused_ordering(658) 00:09:00.310 fused_ordering(659) 00:09:00.310 fused_ordering(660) 00:09:00.310 fused_ordering(661) 00:09:00.310 fused_ordering(662) 00:09:00.310 fused_ordering(663) 00:09:00.310 fused_ordering(664) 00:09:00.310 fused_ordering(665) 00:09:00.310 fused_ordering(666) 00:09:00.310 fused_ordering(667) 00:09:00.310 fused_ordering(668) 00:09:00.310 fused_ordering(669) 00:09:00.310 fused_ordering(670) 00:09:00.310 fused_ordering(671) 00:09:00.310 fused_ordering(672) 00:09:00.310 fused_ordering(673) 00:09:00.310 fused_ordering(674) 00:09:00.310 fused_ordering(675) 00:09:00.310 fused_ordering(676) 00:09:00.310 fused_ordering(677) 00:09:00.310 fused_ordering(678) 00:09:00.310 fused_ordering(679) 00:09:00.310 fused_ordering(680) 00:09:00.310 fused_ordering(681) 00:09:00.310 fused_ordering(682) 00:09:00.310 fused_ordering(683) 00:09:00.310 fused_ordering(684) 00:09:00.310 fused_ordering(685) 00:09:00.310 fused_ordering(686) 00:09:00.310 fused_ordering(687) 00:09:00.310 fused_ordering(688) 00:09:00.310 fused_ordering(689) 00:09:00.310 fused_ordering(690) 
00:09:00.310 fused_ordering(691) [repetitive test output elided: fused_ordering(692) through fused_ordering(1012) follow in strict ascending sequence with no other entries, timestamps 00:09:00.310 through 00:09:01.243]
00:09:01.243 fused_ordering(1013) 00:09:01.243 fused_ordering(1014) 00:09:01.243 fused_ordering(1015) 00:09:01.243 fused_ordering(1016) 00:09:01.243 fused_ordering(1017) 00:09:01.243 fused_ordering(1018) 00:09:01.243 fused_ordering(1019) 00:09:01.243 fused_ordering(1020) 00:09:01.243 fused_ordering(1021) 00:09:01.243 fused_ordering(1022) 00:09:01.243 fused_ordering(1023) 00:09:01.243 23:12:16 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:09:01.243 23:12:16 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:09:01.243 23:12:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:01.243 23:12:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:09:01.243 23:12:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:01.243 23:12:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:09:01.243 23:12:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:01.243 23:12:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:01.243 rmmod nvme_tcp 00:09:01.243 rmmod nvme_fabrics 00:09:01.243 rmmod nvme_keyring 00:09:01.243 23:12:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:01.243 23:12:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:09:01.243 23:12:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:09:01.243 23:12:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2270482 ']' 00:09:01.243 23:12:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2270482 00:09:01.243 23:12:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 2270482 ']' 00:09:01.243 23:12:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 2270482 00:09:01.243 23:12:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:09:01.243 23:12:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:01.243 23:12:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2270482 00:09:01.243 23:12:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:01.243 23:12:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:01.243 23:12:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2270482' 00:09:01.243 killing process with pid 2270482 00:09:01.243 23:12:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 2270482 00:09:01.243 23:12:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 2270482 00:09:01.501 23:12:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:01.501 23:12:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:01.501 23:12:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:01.501 23:12:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:01.501 23:12:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:01.501 23:12:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.501 23:12:16 nvmf_tcp.nvmf_fused_ordering -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:01.501 23:12:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:04.034 23:12:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:04.034 00:09:04.034 real 0m8.102s 00:09:04.034 user 0m5.443s 00:09:04.034 sys 0m3.902s 00:09:04.034 23:12:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:04.034 23:12:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:04.034 ************************************ 00:09:04.034 END TEST nvmf_fused_ordering 00:09:04.034 ************************************ 00:09:04.034 23:12:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:04.034 23:12:18 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:04.034 23:12:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:04.034 23:12:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:04.034 23:12:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:04.034 ************************************ 00:09:04.034 START TEST nvmf_delete_subsystem 00:09:04.034 ************************************ 00:09:04.034 23:12:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:04.034 * Looking for test storage... 00:09:04.034 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:04.034 23:12:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:04.034 23:12:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:09:04.034 23:12:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:04.034 23:12:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:04.034 23:12:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:04.034 23:12:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:04.034 23:12:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:04.034 23:12:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:04.034 23:12:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:04.034 23:12:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:04.034 23:12:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:04.034 23:12:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:04.034 23:12:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:04.034 23:12:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:04.035 23:12:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:04.035 23:12:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:04.035 23:12:18 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:04.035 23:12:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:04.035 23:12:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:04.035 23:12:18 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:04.035 23:12:18 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:04.035 23:12:18 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:04.035 23:12:18 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.035 23:12:18 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.035 23:12:18 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.035 23:12:18 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:09:04.035 23:12:18 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.035 23:12:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:09:04.035 23:12:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:04.035 23:12:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:04.035 23:12:18 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:04.035 23:12:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:04.035 23:12:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:04.035 23:12:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:04.035 23:12:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:04.035 23:12:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:04.035 23:12:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:09:04.035 23:12:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:04.035 23:12:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:04.035 23:12:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:04.035 23:12:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:04.035 23:12:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:04.035 23:12:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:04.035 23:12:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:04.035 23:12:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:04.035 23:12:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:04.035 23:12:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:04.035 23:12:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:09:04.035 23:12:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:05.935 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:05.935 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:09:05.935 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:05.935 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:05.935 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:05.935 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:05.935 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:05.935 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:09:05.935 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:05.935 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:09:05.935 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:09:05.935 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:09:05.935 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:09:05.935 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:09:05.935 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:09:05.935 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:05.935 23:12:20 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:05.935 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:05.935 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:05.935 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:05.935 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:05.935 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:05.935 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:05.935 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:05.935 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:05.935 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:05.935 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:05.935 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:05.935 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:05.935 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:05.935 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:05.935 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:05.935 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:05.935 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:05.935 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:05.935 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:05.935 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:05.935 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:05.935 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:05.935 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:05.935 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:05.935 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:05.935 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:05.935 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:05.935 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:05.935 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:05.935 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:05.935 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:05.935 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:05.935 23:12:20 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:05.935 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:05.935 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:05.935 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:05.936 Found net devices under 0000:84:00.0: cvl_0_0 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:05.936 Found net devices under 0000:84:00.1: cvl_0_1 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:05.936 23:12:20 
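The discovery entries above come from nvmf/common.sh: it matches the Intel E810 PCI ID (0x8086:0x159b) and then reads the kernel's sysfs mapping from each PCI function to its net device, which is how 0000:84:00.0 and 0000:84:00.1 resolve to cvl_0_0 and cvl_0_1. A minimal stand-alone sketch of that lookup (the loop is illustrative, not the script's exact code; the device ID and sysfs paths are the ones shown in the log):

  # list E810 functions (vendor 8086, device 159b) and the net devices bound to them
  for pci in $(lspci -Dd 8086:159b | awk '{print $1}'); do
      for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$netdir" ] && echo "Found net device under $pci: $(basename "$netdir")"
      done
  done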
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:05.936 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:05.936 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:09:05.936 00:09:05.936 --- 10.0.0.2 ping statistics --- 00:09:05.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.936 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:05.936 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:05.936 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:09:05.936 00:09:05.936 --- 10.0.0.1 ping statistics --- 00:09:05.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.936 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2272845 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2272845 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 2272845 ']' 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:05.936 23:12:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:05.936 [2024-07-15 23:12:20.985512] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:09:05.936 [2024-07-15 23:12:20.985582] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:05.936 EAL: No free 2048 kB hugepages reported on node 1 00:09:05.936 [2024-07-15 23:12:21.053018] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:05.936 [2024-07-15 23:12:21.177370] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
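The nvmf_tcp_init entries above isolate one E810 port in its own network namespace so the host-side initiator (10.0.0.1 on cvl_0_1) reaches the target (10.0.0.2 on cvl_0_0) over the physical link instead of loopback, and nvmf_tgt is then launched inside that namespace. Condensed into plain commands taken from the ip/iptables calls in the log (the nvmf_tgt path is shortened from the full workspace path):

  ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator/host side address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                  # host -> target reachability check
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &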
00:09:05.936 [2024-07-15 23:12:21.177440] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:05.936 [2024-07-15 23:12:21.177461] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:05.936 [2024-07-15 23:12:21.177475] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:05.936 [2024-07-15 23:12:21.177487] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:05.936 [2024-07-15 23:12:21.178763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.936 [2024-07-15 23:12:21.178775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.194 23:12:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:06.194 23:12:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:09:06.194 23:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:06.194 23:12:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:06.194 23:12:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:06.194 23:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:06.194 23:12:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:06.194 23:12:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.194 23:12:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:06.194 [2024-07-15 23:12:21.335309] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:06.194 23:12:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.194 23:12:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:06.194 23:12:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.194 23:12:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:06.194 23:12:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.194 23:12:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:06.194 23:12:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.194 23:12:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:06.194 [2024-07-15 23:12:21.351538] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:06.194 23:12:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.194 23:12:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:06.194 23:12:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.194 23:12:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:06.194 NULL1 00:09:06.194 23:12:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:09:06.194 23:12:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:06.194 23:12:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.194 23:12:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:06.194 Delay0 00:09:06.194 23:12:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.194 23:12:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:06.194 23:12:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.194 23:12:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:06.194 23:12:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.194 23:12:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2272872 00:09:06.194 23:12:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:06.194 23:12:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:06.194 EAL: No free 2048 kB hugepages reported on node 1 00:09:06.194 [2024-07-15 23:12:21.426225] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
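Pieced together from the rpc_cmd xtrace above, the target-side setup that delete_subsystem.sh drives can be reproduced by hand with scripts/rpc.py (rpc_cmd is a thin wrapper around it; the default /var/tmp/spdk.sock RPC socket is assumed here, it is not spelled out in the log):

  RPC="scripts/rpc.py"                       # talks to nvmf_tgt over /var/tmp/spdk.sock (assumed default)
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_null_create NULL1 1000 512       # 1000 MiB backing bdev, 512-byte blocks
  $RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s latencies (microseconds)
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # host side: keep 128 I/Os in flight for 5 s, then pull the subsystem out from under the load
  ./build/bin/spdk_nvme_perf -c 0xC -q 128 -o 512 -w randrw -M 70 -t 5 -P 4 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  sleep 2
  $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The Delay0 wrapper keeps I/O outstanding long enough that nvmf_delete_subsystem is bound to race with in-flight commands, which is what the error completions below exercise.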
00:09:08.086 23:12:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:08.086 23:12:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.086 23:12:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:08.343 [repetitive perf output elided: a few hundred 'Read completed with error (sct=0, sc=8)' / 'Write completed with error (sct=0, sc=8)' completions, interleaved with 'starting I/O failed: -6' entries, logged between 00:09:08.343 and 00:09:09.712 while the subsystem was being deleted]
00:09:08.343 [2024-07-15 23:12:23.638600] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9970000c00 is same with the state(5) to be set
00:09:09.711 [2024-07-15 23:12:24.606655] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4c7a70 is same with the state(5) to be set
00:09:09.712 [2024-07-15 23:12:24.639621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f997000d370 is same with the state(5) to be set
00:09:09.712 [2024-07-15 23:12:24.641343] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4c67a0 is same with the state(5) to be set
00:09:09.712 [2024-07-15 23:12:24.641598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4c6390 is same with the state(5) to be set
00:09:09.712 [2024-07-15 23:12:24.641874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4c6e40 is same with the state(5) to be set
00:09:09.712 23:12:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.712 Initializing NVMe Controllers 00:09:09.712 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:09.712 Controller IO queue size 128, less than required. 00:09:09.712 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:09.712 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:09.712 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:09.712 Initialization complete. Launching workers.
00:09:09.712 ======================================================== 00:09:09.712 Latency(us) 00:09:09.712 Device Information : IOPS MiB/s Average min max 00:09:09.712 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 191.00 0.09 966182.04 957.38 2003857.44 00:09:09.712 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 151.81 0.07 927900.24 656.07 2001978.18 00:09:09.712 ======================================================== 00:09:09.712 Total : 342.81 0.17 949229.46 656.07 2003857.44 00:09:09.712 00:09:09.712 23:12:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:09:09.712 [2024-07-15 23:12:24.642686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4c7a70 (9): Bad file descriptor 00:09:09.712 23:12:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2272872 00:09:09.712 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:09:09.712 23:12:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:09.969 23:12:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:09.969 23:12:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2272872 00:09:09.969 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2272872) - No such process 00:09:09.969 23:12:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2272872 00:09:09.969 23:12:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:09:09.969 23:12:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 2272872 00:09:09.969 23:12:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:09:09.969 23:12:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:09.969 23:12:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:09:09.969 23:12:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:09.969 23:12:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 2272872 00:09:09.969 23:12:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:09:09.969 23:12:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:09.969 23:12:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:09.969 23:12:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:09.969 23:12:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:09.969 23:12:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.969 23:12:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:09.969 23:12:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.969 23:12:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:09.969 23:12:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 
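A quick cross-check of the result table above against the test configuration (an informal reading, not something the script asserts): Delay0 was created with all four latency parameters set to 1000000, i.e. roughly one second per I/O, and the reported averages of 966182.04 us and 927900.24 us sit right at that mark, confirming that traffic really flowed through the delay bdev; the sub-millisecond minimums (957.38 us and 656.07 us) are consistent with commands that were failed back almost immediately once nvmf_delete_subsystem tore the subsystem down.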
00:09:09.969 23:12:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:09.969 [2024-07-15 23:12:25.159868] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:09.969 23:12:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.969 23:12:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:09.969 23:12:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.969 23:12:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:09.969 23:12:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.969 23:12:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2273320 00:09:09.969 23:12:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:09:09.969 23:12:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:09.969 23:12:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2273320 00:09:09.969 23:12:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:09.969 EAL: No free 2048 kB hugepages reported on node 1 00:09:09.970 [2024-07-15 23:12:25.230328] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
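For context on the workload being started above: spdk_nvme_perf runs in the background against the cnode1 listener with -c 0xC (cores 2 and 3, matching the 'lcore 2'/'lcore 3' associations in the results), -q 128 (queue depth, hence the 'Controller IO queue size 128' notice), -o 512 (512-byte I/Os), -w randrw -M 70 (roughly a 70% read / 30% write mix, which is why both Read and Write completions appear), and -t 3 (a three-second run). The test then polls the perf PID while the subsystem is deleted underneath it. A sketch of that polling loop, reconstructed from the delete_subsystem.sh line numbers visible in the xtrace (56-60) rather than copied from the script:

    # Bounded wait for the background perf process (shape assumed from the trace).
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do    # perf still running?
        if (( delay++ > 20 )); then              # give up after ~10 s at 0.5 s per pass
            echo "perf did not exit in time" >&2
            break
        fi
        sleep 0.5
    done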
00:09:10.533 23:12:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:10.533 23:12:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2273320 00:09:10.533 23:12:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:11.096 23:12:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:11.096 23:12:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2273320 00:09:11.096 23:12:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:11.660 23:12:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:11.660 23:12:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2273320 00:09:11.660 23:12:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:11.918 23:12:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:11.918 23:12:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2273320 00:09:11.918 23:12:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:12.483 23:12:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:12.483 23:12:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2273320 00:09:12.483 23:12:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:13.047 23:12:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:13.047 23:12:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2273320 00:09:13.047 23:12:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:13.304 Initializing NVMe Controllers 00:09:13.304 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:13.304 Controller IO queue size 128, less than required. 00:09:13.304 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:13.304 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:13.304 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:13.304 Initialization complete. Launching workers. 
00:09:13.304 ======================================================== 00:09:13.304 Latency(us) 00:09:13.304 Device Information : IOPS MiB/s Average min max 00:09:13.304 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003976.56 1000198.13 1041040.15 00:09:13.304 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005247.12 1000278.69 1013559.90 00:09:13.304 ======================================================== 00:09:13.304 Total : 256.00 0.12 1004611.84 1000198.13 1041040.15 00:09:13.304 00:09:13.561 23:12:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:13.561 23:12:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2273320 00:09:13.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2273320) - No such process 00:09:13.561 23:12:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2273320 00:09:13.561 23:12:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:13.561 23:12:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:09:13.561 23:12:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:13.561 23:12:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:09:13.561 23:12:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:13.561 23:12:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:09:13.561 23:12:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:13.561 23:12:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:13.561 rmmod nvme_tcp 00:09:13.561 rmmod nvme_fabrics 00:09:13.561 rmmod nvme_keyring 00:09:13.561 23:12:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:13.561 23:12:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:09:13.561 23:12:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:09:13.561 23:12:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2272845 ']' 00:09:13.561 23:12:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2272845 00:09:13.561 23:12:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 2272845 ']' 00:09:13.561 23:12:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 2272845 00:09:13.561 23:12:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:09:13.561 23:12:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:13.561 23:12:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2272845 00:09:13.561 23:12:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:13.561 23:12:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:13.561 23:12:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2272845' 00:09:13.561 killing process with pid 2272845 00:09:13.561 23:12:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 2272845 00:09:13.561 23:12:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 
2272845 00:09:13.820 23:12:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:13.820 23:12:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:13.820 23:12:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:13.820 23:12:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:13.820 23:12:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:13.820 23:12:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.820 23:12:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:13.820 23:12:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.350 23:12:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:16.350 00:09:16.350 real 0m12.271s 00:09:16.350 user 0m27.875s 00:09:16.350 sys 0m2.984s 00:09:16.350 23:12:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:16.350 23:12:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:16.350 ************************************ 00:09:16.350 END TEST nvmf_delete_subsystem 00:09:16.350 ************************************ 00:09:16.350 23:12:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:16.350 23:12:31 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:09:16.350 23:12:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:16.350 23:12:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:16.350 23:12:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:16.350 ************************************ 00:09:16.350 START TEST nvmf_ns_masking 00:09:16.350 ************************************ 00:09:16.350 23:12:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:09:16.350 * Looking for test storage... 
00:09:16.350 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:16.350 23:12:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:16.350 23:12:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:09:16.350 23:12:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:16.350 23:12:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:16.350 23:12:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:16.350 23:12:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:16.350 23:12:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:16.350 23:12:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:16.350 23:12:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:16.350 23:12:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:16.350 23:12:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:16.350 23:12:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:16.350 23:12:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:16.350 23:12:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:16.350 23:12:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:16.350 23:12:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:16.350 23:12:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:16.350 23:12:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:16.350 23:12:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:16.350 23:12:31 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:16.350 23:12:31 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:16.350 23:12:31 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:16.350 23:12:31 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.350 23:12:31 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.351 23:12:31 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.351 23:12:31 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:09:16.351 23:12:31 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.351 23:12:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:09:16.351 23:12:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:16.351 23:12:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:16.351 23:12:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:16.351 23:12:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:16.351 23:12:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:16.351 23:12:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:16.351 23:12:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:16.351 23:12:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:16.351 23:12:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:16.351 23:12:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:09:16.351 23:12:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:09:16.351 23:12:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:09:16.351 23:12:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=7b590c02-0ee8-464d-a271-69dbf55809f6 00:09:16.351 23:12:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:09:16.351 23:12:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=459f0f6c-1b68-42ec-a7ff-2cd044c30727 00:09:16.351 23:12:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:09:16.351 23:12:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:09:16.351 23:12:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:09:16.351 23:12:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:09:16.351 23:12:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=27421386-cd38-429b-98de-43e1f284e1cb 00:09:16.351 23:12:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:09:16.351 23:12:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:16.351 23:12:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:16.351 23:12:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:16.351 23:12:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:16.351 23:12:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:16.351 23:12:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.351 23:12:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:16.351 23:12:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.351 23:12:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:16.351 23:12:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:16.351 23:12:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:09:16.351 23:12:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:18.256 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:18.256 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:18.256 
23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.256 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:18.256 Found net devices under 0000:84:00.0: cvl_0_0 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:18.257 Found net devices under 0000:84:00.1: cvl_0_1 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:18.257 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:18.257 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:09:18.257 00:09:18.257 --- 10.0.0.2 ping statistics --- 00:09:18.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.257 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:18.257 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:18.257 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:09:18.257 00:09:18.257 --- 10.0.0.1 ping statistics --- 00:09:18.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.257 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2275756 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2275756 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2275756 ']' 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:18.257 23:12:33 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:18.257 23:12:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:18.257 [2024-07-15 23:12:33.419973] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:09:18.257 [2024-07-15 23:12:33.420053] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:18.257 EAL: No free 2048 kB hugepages reported on node 1 00:09:18.257 [2024-07-15 23:12:33.487211] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.520 [2024-07-15 23:12:33.603900] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:18.521 [2024-07-15 23:12:33.603949] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:18.521 [2024-07-15 23:12:33.603979] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:18.521 [2024-07-15 23:12:33.603991] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:18.521 [2024-07-15 23:12:33.604001] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:18.521 [2024-07-15 23:12:33.604048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.451 23:12:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:19.451 23:12:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:09:19.451 23:12:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:19.451 23:12:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:19.451 23:12:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:19.451 23:12:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:19.451 23:12:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:19.451 [2024-07-15 23:12:34.701861] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:19.451 23:12:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:09:19.451 23:12:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:09:19.451 23:12:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:20.016 Malloc1 00:09:20.016 23:12:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:20.016 Malloc2 00:09:20.273 23:12:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
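Condensed from the xtrace above and the entries that follow, the target side of this test is assembled entirely over JSON-RPC: -a on nvmf_create_subsystem allows any host NQN to connect and -s sets the serial number that waitforserial later greps for in lsblk output. Paths are shortened here and the comments are annotations, not trace output:

    rpc.py nvmf_create_transport -t tcp -o -u 8192                   # TCP transport with the options the test passes
    rpc.py bdev_malloc_create 64 512 -b Malloc1                      # 64 MB RAM-backed bdev, 512-byte blocks
    rpc.py bdev_malloc_create 64 512 -b Malloc2
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    # next in the trace:
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420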
00:09:20.273 23:12:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:09:20.530 23:12:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:20.787 [2024-07-15 23:12:36.039566] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:20.787 23:12:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:09:20.787 23:12:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 27421386-cd38-429b-98de-43e1f284e1cb -a 10.0.0.2 -s 4420 -i 4 00:09:21.045 23:12:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:09:21.045 23:12:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:21.045 23:12:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:21.045 23:12:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:21.045 23:12:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:22.938 23:12:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:22.938 23:12:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:22.938 23:12:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:22.938 23:12:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:22.938 23:12:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:22.938 23:12:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:22.938 23:12:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:22.938 23:12:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:23.194 23:12:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:23.194 23:12:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:23.194 23:12:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:09:23.194 23:12:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:23.194 23:12:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:23.194 [ 0]:0x1 00:09:23.194 23:12:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:23.194 23:12:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:23.194 23:12:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8c973c298dc04ef18212480a2736f994 00:09:23.194 23:12:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8c973c298dc04ef18212480a2736f994 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:23.194 23:12:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
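The visibility probe that the rest of the trace repeats after every masking change boils down to two nvme-cli calls. The sketch below mirrors ns_masking.sh's ns_is_visible helper only as far as the xtrace reveals it (a grep over nvme list-ns plus an NGUID comparison), so the exact function body is an assumption; on the connect line above, -I passes the host UUID generated earlier (HOSTID) and -i 4 asks for four I/O queues.

    # Assumed shape of the repeated visibility check:
    ns_is_visible() {
        nvme list-ns /dev/nvme0 | grep "$1" || return 1               # is the NSID listed at all?
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]            # masked namespaces report an all-zero NGUID
    }
    ns_is_visible 0x1    # NSID 1 here: grep prints '[ 0]:0x1' and the NGUID is 8c973c29...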
00:09:23.450 23:12:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:09:23.450 23:12:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:23.450 23:12:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:23.450 [ 0]:0x1 00:09:23.450 23:12:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:23.450 23:12:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:23.450 23:12:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8c973c298dc04ef18212480a2736f994 00:09:23.450 23:12:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8c973c298dc04ef18212480a2736f994 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:23.450 23:12:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:09:23.450 23:12:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:23.450 23:12:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:23.450 [ 1]:0x2 00:09:23.450 23:12:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:23.450 23:12:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:23.450 23:12:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=48edffb839fc4c57b090fa10a86ee112 00:09:23.450 23:12:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 48edffb839fc4c57b090fa10a86ee112 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:23.450 23:12:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:09:23.450 23:12:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:23.707 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.707 23:12:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:23.707 23:12:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:09:23.963 23:12:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:09:23.963 23:12:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 27421386-cd38-429b-98de-43e1f284e1cb -a 10.0.0.2 -s 4420 -i 4 00:09:24.220 23:12:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:09:24.220 23:12:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:24.220 23:12:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:24.220 23:12:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:09:24.220 23:12:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:09:24.220 23:12:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:26.112 23:12:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:26.112 23:12:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:26.112 23:12:41 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:26.112 23:12:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:26.112 23:12:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:26.112 23:12:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:26.112 23:12:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:26.112 23:12:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:26.369 23:12:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:26.369 23:12:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:26.369 23:12:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:09:26.370 23:12:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:26.370 23:12:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:26.370 23:12:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:26.370 23:12:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:26.370 23:12:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:26.370 23:12:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:26.370 23:12:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:26.370 23:12:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:26.370 23:12:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:26.370 23:12:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:26.370 23:12:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:26.370 23:12:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:26.370 23:12:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:26.370 23:12:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:26.370 23:12:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:26.370 23:12:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:26.370 23:12:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:26.370 23:12:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:09:26.370 23:12:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:26.370 23:12:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:26.370 [ 0]:0x2 00:09:26.370 23:12:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:26.370 23:12:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:26.370 23:12:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=48edffb839fc4c57b090fa10a86ee112 00:09:26.370 23:12:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
48edffb839fc4c57b090fa10a86ee112 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:26.370 23:12:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:26.627 23:12:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:09:26.627 23:12:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:26.627 23:12:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:26.627 [ 0]:0x1 00:09:26.627 23:12:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:26.627 23:12:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:26.885 23:12:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8c973c298dc04ef18212480a2736f994 00:09:26.885 23:12:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8c973c298dc04ef18212480a2736f994 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:26.885 23:12:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:09:26.885 23:12:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:26.885 23:12:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:26.885 [ 1]:0x2 00:09:26.885 23:12:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:26.885 23:12:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:26.885 23:12:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=48edffb839fc4c57b090fa10a86ee112 00:09:26.885 23:12:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 48edffb839fc4c57b090fa10a86ee112 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:26.885 23:12:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:27.143 23:12:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:09:27.143 23:12:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:27.143 23:12:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:27.143 23:12:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:27.143 23:12:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:27.143 23:12:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:27.143 23:12:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:27.143 23:12:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:27.143 23:12:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:27.143 23:12:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:27.143 23:12:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:27.143 23:12:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:27.143 23:12:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:09:27.143 23:12:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:27.143 23:12:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:27.143 23:12:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:27.143 23:12:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:27.143 23:12:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:27.143 23:12:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:09:27.143 23:12:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:27.143 23:12:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:27.143 [ 0]:0x2 00:09:27.143 23:12:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:27.143 23:12:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:27.143 23:12:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=48edffb839fc4c57b090fa10a86ee112 00:09:27.143 23:12:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 48edffb839fc4c57b090fa10a86ee112 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:27.143 23:12:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:09:27.143 23:12:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:27.143 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.143 23:12:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:27.400 23:12:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:09:27.400 23:12:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 27421386-cd38-429b-98de-43e1f284e1cb -a 10.0.0.2 -s 4420 -i 4 00:09:27.658 23:12:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:27.658 23:12:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:27.658 23:12:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:27.658 23:12:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:09:27.658 23:12:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:09:27.658 23:12:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:30.181 23:12:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:30.181 23:12:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:30.181 23:12:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:30.181 23:12:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:09:30.181 23:12:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:30.181 23:12:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
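Stripped of the assertions, the masking controls exercised above come down to three RPCs; the comments note what the surrounding visibility checks observe:

    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible   # NSID 1 starts out hidden
    rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1        # NSID 1 becomes visible to host1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1        # hidden again (all-zero NGUID)

Further on, the same nvmf_ns_remove_host is attempted against NSID 2, which was added without --no-auto-visible, and the target rejects it with the -32602 'Invalid parameters' JSON-RPC error captured in the trace.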
00:09:30.181 23:12:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:30.181 23:12:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:30.181 23:12:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:30.181 23:12:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:30.181 23:12:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:09:30.181 23:12:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:30.181 23:12:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:30.181 [ 0]:0x1 00:09:30.181 23:12:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:30.181 23:12:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:30.181 23:12:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8c973c298dc04ef18212480a2736f994 00:09:30.181 23:12:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8c973c298dc04ef18212480a2736f994 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:30.181 23:12:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:09:30.181 23:12:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:30.181 23:12:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:30.181 [ 1]:0x2 00:09:30.181 23:12:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:30.181 23:12:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:30.181 23:12:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=48edffb839fc4c57b090fa10a86ee112 00:09:30.181 23:12:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 48edffb839fc4c57b090fa10a86ee112 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:30.181 23:12:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:30.439 23:12:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:09:30.439 23:12:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:30.439 23:12:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:30.439 23:12:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:30.439 23:12:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:30.439 23:12:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:30.439 23:12:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:30.439 23:12:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:30.439 23:12:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:30.439 23:12:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:30.439 23:12:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:30.439 23:12:45 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:09:30.439 23:12:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:30.439 23:12:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:30.439 23:12:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:30.439 23:12:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:30.439 23:12:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:30.439 23:12:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:30.439 23:12:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:09:30.439 23:12:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:30.439 23:12:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:30.439 [ 0]:0x2 00:09:30.439 23:12:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:30.439 23:12:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:30.439 23:12:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=48edffb839fc4c57b090fa10a86ee112 00:09:30.439 23:12:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 48edffb839fc4c57b090fa10a86ee112 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:30.439 23:12:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:30.439 23:12:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:30.439 23:12:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:30.439 23:12:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:30.439 23:12:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:30.439 23:12:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:30.439 23:12:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:30.439 23:12:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:30.439 23:12:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:30.439 23:12:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:30.439 23:12:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:30.439 23:12:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:30.696 [2024-07-15 23:12:45.941526] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:09:30.696 request: 00:09:30.696 { 00:09:30.696 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:30.696 "nsid": 2, 00:09:30.696 "host": "nqn.2016-06.io.spdk:host1", 00:09:30.696 "method": "nvmf_ns_remove_host", 00:09:30.696 "req_id": 1 00:09:30.696 } 00:09:30.696 Got JSON-RPC error response 00:09:30.696 response: 00:09:30.696 { 00:09:30.696 "code": -32602, 00:09:30.696 "message": "Invalid parameters" 00:09:30.696 } 00:09:30.696 23:12:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:30.696 23:12:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:30.696 23:12:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:30.696 23:12:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:30.696 23:12:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:09:30.696 23:12:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:30.696 23:12:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:30.696 23:12:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:30.696 23:12:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:30.696 23:12:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:30.696 23:12:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:30.696 23:12:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:30.696 23:12:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:30.696 23:12:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:30.696 23:12:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:30.696 23:12:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:30.953 23:12:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:30.953 23:12:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:30.953 23:12:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:30.953 23:12:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:30.953 23:12:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:30.953 23:12:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:30.953 23:12:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:09:30.953 23:12:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:30.953 23:12:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:30.953 [ 0]:0x2 00:09:30.953 23:12:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:30.953 23:12:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:30.953 23:12:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=48edffb839fc4c57b090fa10a86ee112 00:09:30.953 23:12:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
48edffb839fc4c57b090fa10a86ee112 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:30.953 23:12:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:09:30.953 23:12:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:30.953 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.953 23:12:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2277393 00:09:30.953 23:12:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:09:30.953 23:12:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:09:30.954 23:12:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2277393 /var/tmp/host.sock 00:09:30.954 23:12:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2277393 ']' 00:09:30.954 23:12:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:09:30.954 23:12:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:30.954 23:12:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:09:30.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:09:30.954 23:12:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:30.954 23:12:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:31.211 [2024-07-15 23:12:46.274947] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
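The visibility checks that dominate the masking trace above all go through the same small helper (target/ns_masking.sh@43-45). Condensed from the commands shown in this log, with the /dev/nvme0 device node and the all-zero NGUID comparison taken directly from the trace, the check is roughly:

ns_is_visible() {
    local nsid=$1
    # A namespace the host may see shows up in the controller's namespace list ...
    nvme list-ns /dev/nvme0 | grep "$nsid"
    # ... and reports a real NGUID; a masked namespace identifies as all zeroes.
    local nguid
    nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]
}

The NOT wrapper seen around it (common/autotest_common.sh, the es=0/es=1 bookkeeping in the trace) succeeds only when the wrapped command fails, which is how the negative cases such as "NOT ns_is_visible 0x1" and the rejected nvmf_ns_remove_host RPC above are asserted.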
00:09:31.212 [2024-07-15 23:12:46.275046] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2277393 ] 00:09:31.212 EAL: No free 2048 kB hugepages reported on node 1 00:09:31.212 [2024-07-15 23:12:46.339273] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.212 [2024-07-15 23:12:46.459957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:32.145 23:12:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:32.145 23:12:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:09:32.145 23:12:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:32.402 23:12:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:32.660 23:12:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 7b590c02-0ee8-464d-a271-69dbf55809f6 00:09:32.660 23:12:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:09:32.660 23:12:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 7B590C020EE8464DA27169DBF55809F6 -i 00:09:32.918 23:12:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 459f0f6c-1b68-42ec-a7ff-2cd044c30727 00:09:32.918 23:12:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:09:32.918 23:12:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 459F0F6C1B6842ECA7FF2CD044C30727 -i 00:09:33.176 23:12:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:33.435 23:12:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:09:33.692 23:12:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:09:33.693 23:12:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:09:33.950 nvme0n1 00:09:33.950 23:12:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:09:33.950 23:12:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:09:34.515 nvme1n2 00:09:34.515 23:12:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:09:34.515 23:12:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:09:34.515 23:12:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:09:34.515 23:12:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:09:34.515 23:12:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:09:34.773 23:12:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:09:34.773 23:12:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:09:34.773 23:12:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:09:34.773 23:12:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:09:35.031 23:12:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 7b590c02-0ee8-464d-a271-69dbf55809f6 == \7\b\5\9\0\c\0\2\-\0\e\e\8\-\4\6\4\d\-\a\2\7\1\-\6\9\d\b\f\5\5\8\0\9\f\6 ]] 00:09:35.031 23:12:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:09:35.031 23:12:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:09:35.032 23:12:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:09:35.289 23:12:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 459f0f6c-1b68-42ec-a7ff-2cd044c30727 == \4\5\9\f\0\f\6\c\-\1\b\6\8\-\4\2\e\c\-\a\7\f\f\-\2\c\d\0\4\4\c\3\0\7\2\7 ]] 00:09:35.289 23:12:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 2277393 00:09:35.289 23:12:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 2277393 ']' 00:09:35.289 23:12:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2277393 00:09:35.289 23:12:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:09:35.289 23:12:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:35.289 23:12:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2277393 00:09:35.289 23:12:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:35.290 23:12:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:35.290 23:12:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2277393' 00:09:35.290 killing process with pid 2277393 00:09:35.290 23:12:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2277393 00:09:35.290 23:12:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2277393 00:09:35.855 23:12:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:36.112 23:12:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:09:36.112 23:12:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:09:36.112 23:12:51 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:36.112 23:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:09:36.113 23:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:36.113 23:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:09:36.113 23:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:36.113 23:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:36.113 rmmod nvme_tcp 00:09:36.113 rmmod nvme_fabrics 00:09:36.113 rmmod nvme_keyring 00:09:36.113 23:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:36.113 23:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:09:36.113 23:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:09:36.113 23:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2275756 ']' 00:09:36.113 23:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2275756 00:09:36.113 23:12:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 2275756 ']' 00:09:36.113 23:12:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2275756 00:09:36.113 23:12:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:09:36.113 23:12:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:36.113 23:12:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2275756 00:09:36.113 23:12:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:36.113 23:12:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:36.113 23:12:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2275756' 00:09:36.113 killing process with pid 2275756 00:09:36.113 23:12:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2275756 00:09:36.113 23:12:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2275756 00:09:36.371 23:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:36.371 23:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:36.371 23:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:36.371 23:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:36.371 23:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:36.371 23:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.371 23:12:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:36.371 23:12:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.903 23:12:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:38.903 00:09:38.903 real 0m22.504s 00:09:38.903 user 0m30.023s 00:09:38.903 sys 0m4.183s 00:09:38.903 23:12:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:38.903 23:12:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:38.903 ************************************ 00:09:38.903 END TEST nvmf_ns_masking 00:09:38.903 ************************************ 00:09:38.903 23:12:53 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:09:38.903 23:12:53 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:09:38.903 23:12:53 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:38.903 23:12:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:38.903 23:12:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:38.903 23:12:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:38.903 ************************************ 00:09:38.903 START TEST nvmf_nvme_cli 00:09:38.903 ************************************ 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:38.903 * Looking for test storage... 00:09:38.903 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:09:38.903 23:12:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:40.801 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:40.801 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:09:40.801 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:40.801 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:40.801 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:40.801 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:40.801 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:40.801 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:09:40.801 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:40.801 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:09:40.801 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:09:40.801 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:09:40.801 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:09:40.801 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:09:40.801 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:09:40.801 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:40.801 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:40.801 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:40.801 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:40.801 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:40.801 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:40.801 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:40.801 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:40.801 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:40.801 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:40.801 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:40.801 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:40.801 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:40.801 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:40.801 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:40.801 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:40.801 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:40.801 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:40.801 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:40.801 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:40.801 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:40.802 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:40.802 Found net devices under 0000:84:00.0: cvl_0_0 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:40.802 Found net devices under 0000:84:00.1: cvl_0_1 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:40.802 23:12:55 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:40.802 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:40.802 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:09:40.802 00:09:40.802 --- 10.0.0.2 ping statistics --- 00:09:40.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.802 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:40.802 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:40.802 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:09:40.802 00:09:40.802 --- 10.0.0.1 ping statistics --- 00:09:40.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.802 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:40.802 23:12:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:40.802 23:12:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:40.802 23:12:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:40.802 23:12:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:40.802 23:12:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:09:40.802 23:12:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:40.802 23:12:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:40.802 23:12:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:40.802 23:12:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2280034 00:09:40.802 23:12:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:40.802 23:12:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2280034 00:09:40.802 23:12:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 2280034 ']' 00:09:40.802 23:12:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.802 23:12:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:40.802 23:12:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.802 23:12:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:40.802 23:12:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:40.802 [2024-07-15 23:12:56.076635] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
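The target that has just started above sits behind the network-namespace arrangement nvmftestinit traced a few lines earlier: the first e810 port (cvl_0_0) is moved into its own namespace for the target, the second (cvl_0_1) stays in the default namespace for the initiator. Collapsing that nvmf_tcp_init trace into plain commands, with interface, namespace, and address values exactly as they appear in this log:

ip -4 addr flush cvl_0_0                                            # drop stale addresses
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

Because nvmf_tgt is launched inside cvl_0_0_ns_spdk (the "ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt" invocation above), its TCP listener at 10.0.0.2:4420 is reachable from the initiator only over the other e810 port, which is exactly what the two ping checks verify before the test proceeds.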
00:09:40.802 [2024-07-15 23:12:56.076716] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:40.802 EAL: No free 2048 kB hugepages reported on node 1 00:09:41.059 [2024-07-15 23:12:56.143111] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:41.059 [2024-07-15 23:12:56.256678] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:41.059 [2024-07-15 23:12:56.256736] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:41.059 [2024-07-15 23:12:56.256774] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:41.059 [2024-07-15 23:12:56.256786] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:41.059 [2024-07-15 23:12:56.256796] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:41.059 [2024-07-15 23:12:56.256882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:41.059 [2024-07-15 23:12:56.256947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:41.059 [2024-07-15 23:12:56.257013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:41.059 [2024-07-15 23:12:56.257016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.315 23:12:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:41.315 23:12:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:09:41.315 23:12:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:41.315 23:12:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:41.315 23:12:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:41.315 23:12:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:41.315 23:12:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:41.315 23:12:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.315 23:12:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:41.315 [2024-07-15 23:12:56.414679] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:41.315 23:12:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.315 23:12:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:41.315 23:12:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.315 23:12:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:41.315 Malloc0 00:09:41.315 23:12:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.315 23:12:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:41.315 23:12:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.315 23:12:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:41.315 Malloc1 00:09:41.315 23:12:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.315 23:12:56 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:09:41.315 23:12:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.315 23:12:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:41.315 23:12:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.315 23:12:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:41.315 23:12:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.315 23:12:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:41.315 23:12:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.315 23:12:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:41.315 23:12:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.315 23:12:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:41.315 23:12:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.315 23:12:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:41.315 23:12:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.315 23:12:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:41.315 [2024-07-15 23:12:56.498595] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:41.315 23:12:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.315 23:12:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:41.315 23:12:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.315 23:12:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:41.315 23:12:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.315 23:12:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420 00:09:41.315 00:09:41.315 Discovery Log Number of Records 2, Generation counter 2 00:09:41.315 =====Discovery Log Entry 0====== 00:09:41.315 trtype: tcp 00:09:41.315 adrfam: ipv4 00:09:41.315 subtype: current discovery subsystem 00:09:41.315 treq: not required 00:09:41.315 portid: 0 00:09:41.315 trsvcid: 4420 00:09:41.315 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:41.315 traddr: 10.0.0.2 00:09:41.315 eflags: explicit discovery connections, duplicate discovery information 00:09:41.315 sectype: none 00:09:41.315 =====Discovery Log Entry 1====== 00:09:41.315 trtype: tcp 00:09:41.315 adrfam: ipv4 00:09:41.315 subtype: nvme subsystem 00:09:41.315 treq: not required 00:09:41.315 portid: 0 00:09:41.315 trsvcid: 4420 00:09:41.315 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:41.315 traddr: 10.0.0.2 00:09:41.315 eflags: none 00:09:41.315 sectype: none 00:09:41.315 23:12:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:09:41.315 23:12:56 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:09:41.315 23:12:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:41.315 23:12:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:41.315 23:12:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:41.570 23:12:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:41.570 23:12:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:41.570 23:12:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:41.570 23:12:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:41.570 23:12:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:09:41.570 23:12:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:42.203 23:12:57 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:42.203 23:12:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:09:42.203 23:12:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:42.203 23:12:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:09:42.203 23:12:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:09:42.203 23:12:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:09:44.096 23:12:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:44.096 23:12:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:44.096 23:12:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:44.096 23:12:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:09:44.096 23:12:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:44.096 23:12:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:09:44.096 23:12:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:09:44.096 23:12:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:44.096 23:12:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:44.096 23:12:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:44.096 23:12:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:44.096 23:12:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:44.096 23:12:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:44.096 23:12:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:44.096 23:12:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:44.096 23:12:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:09:44.096 23:12:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:44.096 23:12:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:44.096 23:12:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:09:44.096 23:12:59 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:44.096 23:12:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:09:44.096 /dev/nvme0n1 ]] 00:09:44.096 23:12:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:09:44.096 23:12:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:09:44.096 23:12:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:44.096 23:12:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:44.096 23:12:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:44.096 23:12:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:44.096 23:12:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:44.096 23:12:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:44.096 23:12:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:44.096 23:12:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:44.096 23:12:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:09:44.096 23:12:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:44.096 23:12:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:44.096 23:12:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:09:44.096 23:12:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:44.096 23:12:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:09:44.096 23:12:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:44.096 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.096 23:12:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:44.096 23:12:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:09:44.096 23:12:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:44.096 23:12:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:44.096 23:12:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:44.096 23:12:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:44.096 23:12:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:09:44.097 23:12:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:09:44.097 23:12:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:44.097 23:12:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.097 23:12:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:44.097 23:12:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.097 23:12:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:44.097 23:12:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:09:44.097 23:12:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:44.097 23:12:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:09:44.097 23:12:59 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:44.097 23:12:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:09:44.097 23:12:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:44.097 23:12:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:44.097 rmmod nvme_tcp 00:09:44.097 rmmod nvme_fabrics 00:09:44.097 rmmod nvme_keyring 00:09:44.354 23:12:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:44.354 23:12:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:09:44.354 23:12:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:09:44.354 23:12:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2280034 ']' 00:09:44.354 23:12:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2280034 00:09:44.354 23:12:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 2280034 ']' 00:09:44.354 23:12:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 2280034 00:09:44.354 23:12:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:09:44.354 23:12:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:44.354 23:12:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2280034 00:09:44.354 23:12:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:44.354 23:12:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:44.354 23:12:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2280034' 00:09:44.354 killing process with pid 2280034 00:09:44.354 23:12:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 2280034 00:09:44.354 23:12:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 2280034 00:09:44.611 23:12:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:44.611 23:12:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:44.611 23:12:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:44.612 23:12:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:44.612 23:12:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:44.612 23:12:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.612 23:12:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:44.612 23:12:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.141 23:13:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:47.141 00:09:47.141 real 0m8.140s 00:09:47.141 user 0m14.430s 00:09:47.141 sys 0m2.256s 00:09:47.141 23:13:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:47.141 23:13:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:47.141 ************************************ 00:09:47.141 END TEST nvmf_nvme_cli 00:09:47.141 ************************************ 00:09:47.141 23:13:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:47.141 23:13:01 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:09:47.141 23:13:01 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:09:47.141 23:13:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:47.141 23:13:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:47.141 23:13:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:47.141 ************************************ 00:09:47.141 START TEST nvmf_vfio_user 00:09:47.141 ************************************ 00:09:47.141 23:13:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:09:47.141 * Looking for test storage... 00:09:47.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:47.141 23:13:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:47.141 23:13:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:09:47.141 23:13:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:47.141 23:13:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:47.141 23:13:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:47.141 23:13:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:47.141 23:13:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:47.141 23:13:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:47.142 23:13:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:47.142 23:13:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:47.142 23:13:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:47.142 23:13:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:47.142 23:13:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:47.142 23:13:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:47.142 23:13:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:47.142 23:13:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:47.142 23:13:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:47.142 23:13:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:47.142 23:13:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:47.142 23:13:01 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:47.142 23:13:01 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:47.142 23:13:01 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:47.142 23:13:01 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.142 23:13:01 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.142 23:13:01 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.142 23:13:01 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:09:47.142 23:13:01 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.142 23:13:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:09:47.142 23:13:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:47.142 23:13:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:47.142 23:13:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:47.142 23:13:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:47.142 23:13:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:47.142 23:13:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:47.142 23:13:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:47.142 23:13:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:47.142 23:13:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:09:47.142 23:13:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:47.142 23:13:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:09:47.142 
23:13:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:47.142 23:13:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:09:47.142 23:13:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:09:47.142 23:13:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:09:47.142 23:13:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:09:47.142 23:13:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:09:47.142 23:13:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:09:47.142 23:13:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2280844 00:09:47.142 23:13:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:09:47.142 23:13:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2280844' 00:09:47.142 Process pid: 2280844 00:09:47.142 23:13:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:09:47.142 23:13:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2280844 00:09:47.142 23:13:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 2280844 ']' 00:09:47.142 23:13:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.142 23:13:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:47.142 23:13:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.142 23:13:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:47.142 23:13:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:09:47.142 [2024-07-15 23:13:02.003191] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:09:47.142 [2024-07-15 23:13:02.003287] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.142 EAL: No free 2048 kB hugepages reported on node 1 00:09:47.142 [2024-07-15 23:13:02.070346] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:47.142 [2024-07-15 23:13:02.174770] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:47.142 [2024-07-15 23:13:02.174832] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:47.142 [2024-07-15 23:13:02.174847] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:47.142 [2024-07-15 23:13:02.174861] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:47.142 [2024-07-15 23:13:02.174872] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
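For reference, the vfio-user target provisioning carried out by the xtrace output that follows condenses to the shell sequence below. This is a sketch only: the full workspace paths to nvmf_tgt and rpc.py are abbreviated, and every argument is taken from this run's log.
    # Condensed from this run's xtrace; rpc.py = .../spdk/scripts/rpc.py, nvmf_tgt = .../spdk/build/bin/nvmf_tgt
    nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &            # target app (pid 2280844 in this run)
    rpc.py nvmf_create_transport -t VFIOUSER             # register the vfio-user transport
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1      # directory that will hold the controller socket
    rpc.py bdev_malloc_create 64 512 -b Malloc1          # malloc bdev: size 64 (MB), 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
        -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
    # The same mkdir/bdev/subsystem/ns/listener steps are repeated for Malloc2, cnode2 and /var/run/vfio-user/domain/vfio-user2/2.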
00:09:47.142 [2024-07-15 23:13:02.174955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.142 [2024-07-15 23:13:02.175010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:47.142 [2024-07-15 23:13:02.175075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:47.142 [2024-07-15 23:13:02.175078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.705 23:13:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:47.705 23:13:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:09:47.705 23:13:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:09:49.073 23:13:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:09:49.073 23:13:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:09:49.073 23:13:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:09:49.073 23:13:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:49.073 23:13:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:09:49.073 23:13:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:49.330 Malloc1 00:09:49.330 23:13:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:09:49.587 23:13:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:09:49.843 23:13:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:09:50.100 23:13:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:50.100 23:13:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:09:50.100 23:13:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:50.357 Malloc2 00:09:50.357 23:13:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:09:50.614 23:13:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:09:50.870 23:13:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:09:51.129 23:13:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:09:51.129 23:13:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:09:51.129 23:13:06 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:51.129 23:13:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:09:51.129 23:13:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:09:51.129 23:13:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:09:51.129 [2024-07-15 23:13:06.353992] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:09:51.129 [2024-07-15 23:13:06.354040] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2281392 ] 00:09:51.129 EAL: No free 2048 kB hugepages reported on node 1 00:09:51.129 [2024-07-15 23:13:06.387144] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:09:51.129 [2024-07-15 23:13:06.396169] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:51.129 [2024-07-15 23:13:06.396199] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f19b577e000 00:09:51.129 [2024-07-15 23:13:06.397163] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:51.129 [2024-07-15 23:13:06.398159] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:51.129 [2024-07-15 23:13:06.399170] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:51.129 [2024-07-15 23:13:06.400173] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:51.129 [2024-07-15 23:13:06.401177] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:51.129 [2024-07-15 23:13:06.402179] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:51.129 [2024-07-15 23:13:06.403185] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:51.129 [2024-07-15 23:13:06.404189] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:51.129 [2024-07-15 23:13:06.405195] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:51.129 [2024-07-15 23:13:06.405215] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f19b5773000 00:09:51.129 [2024-07-15 23:13:06.406331] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:51.129 [2024-07-15 23:13:06.421968] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:09:51.129 [2024-07-15 23:13:06.422008] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:09:51.129 [2024-07-15 23:13:06.424321] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:09:51.129 [2024-07-15 23:13:06.424384] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:09:51.129 [2024-07-15 23:13:06.424482] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:09:51.129 [2024-07-15 23:13:06.424516] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:09:51.129 [2024-07-15 23:13:06.424527] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:09:51.129 [2024-07-15 23:13:06.425307] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:09:51.129 [2024-07-15 23:13:06.425333] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:09:51.129 [2024-07-15 23:13:06.425347] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:09:51.129 [2024-07-15 23:13:06.426315] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:09:51.129 [2024-07-15 23:13:06.426335] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:09:51.129 [2024-07-15 23:13:06.426349] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:09:51.129 [2024-07-15 23:13:06.427322] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:09:51.129 [2024-07-15 23:13:06.427341] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:09:51.129 [2024-07-15 23:13:06.428325] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:09:51.129 [2024-07-15 23:13:06.428349] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:09:51.129 [2024-07-15 23:13:06.428359] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:09:51.129 [2024-07-15 23:13:06.428370] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:09:51.129 [2024-07-15 23:13:06.428480] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:09:51.129 [2024-07-15 23:13:06.428488] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:09:51.129 [2024-07-15 23:13:06.428496] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:09:51.129 [2024-07-15 23:13:06.432749] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:09:51.129 [2024-07-15 23:13:06.433354] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:09:51.129 [2024-07-15 23:13:06.434359] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:09:51.129 [2024-07-15 23:13:06.435358] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:51.129 [2024-07-15 23:13:06.435467] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:09:51.129 [2024-07-15 23:13:06.436375] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:09:51.129 [2024-07-15 23:13:06.436394] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:09:51.129 [2024-07-15 23:13:06.436403] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:09:51.129 [2024-07-15 23:13:06.436426] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:09:51.129 [2024-07-15 23:13:06.436440] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:09:51.129 [2024-07-15 23:13:06.436470] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:51.129 [2024-07-15 23:13:06.436479] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:51.129 [2024-07-15 23:13:06.436502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:51.129 [2024-07-15 23:13:06.436575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:09:51.129 [2024-07-15 23:13:06.436593] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:09:51.129 [2024-07-15 23:13:06.436601] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:09:51.129 [2024-07-15 23:13:06.436608] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:09:51.129 [2024-07-15 23:13:06.436616] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:09:51.129 [2024-07-15 23:13:06.436623] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:09:51.129 [2024-07-15 23:13:06.436635] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:09:51.129 [2024-07-15 23:13:06.436643] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:09:51.129 [2024-07-15 23:13:06.436657] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:09:51.129 [2024-07-15 23:13:06.436677] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:09:51.129 [2024-07-15 23:13:06.436693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:09:51.129 [2024-07-15 23:13:06.436716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:09:51.129 [2024-07-15 23:13:06.436753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:09:51.129 [2024-07-15 23:13:06.436766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:09:51.129 [2024-07-15 23:13:06.436779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:09:51.129 [2024-07-15 23:13:06.436787] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:09:51.129 [2024-07-15 23:13:06.436804] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:09:51.129 [2024-07-15 23:13:06.436820] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:09:51.130 [2024-07-15 23:13:06.436833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:09:51.130 [2024-07-15 23:13:06.436844] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:09:51.130 [2024-07-15 23:13:06.436853] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:09:51.130 [2024-07-15 23:13:06.436868] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:09:51.130 [2024-07-15 23:13:06.436880] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:09:51.130 [2024-07-15 23:13:06.436893] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:51.130 [2024-07-15 23:13:06.436908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:09:51.130 [2024-07-15 23:13:06.436978] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:09:51.130 [2024-07-15 23:13:06.436995] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:09:51.130 [2024-07-15 23:13:06.437009] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:09:51.130 [2024-07-15 23:13:06.437018] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:09:51.130 [2024-07-15 23:13:06.437028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:09:51.130 [2024-07-15 23:13:06.437058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:09:51.130 [2024-07-15 23:13:06.437082] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:09:51.130 [2024-07-15 23:13:06.437118] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:09:51.130 [2024-07-15 23:13:06.437134] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:09:51.130 [2024-07-15 23:13:06.437146] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:51.130 [2024-07-15 23:13:06.437154] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:51.130 [2024-07-15 23:13:06.437164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:51.130 [2024-07-15 23:13:06.437186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:09:51.130 [2024-07-15 23:13:06.437209] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:09:51.130 [2024-07-15 23:13:06.437224] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:09:51.130 [2024-07-15 23:13:06.437236] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:51.130 [2024-07-15 23:13:06.437244] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:51.130 [2024-07-15 23:13:06.437253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:51.130 [2024-07-15 23:13:06.437268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:09:51.130 [2024-07-15 23:13:06.437283] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:09:51.130 [2024-07-15 23:13:06.437295] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
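The admin-queue trace around this point comes from spdk_nvme_identify attaching to the first vfio-user controller (read VS and CAP, enable the controller, identify controller, configure AER, set the number of queues, then identify the active namespaces). For reference, the host-side invocations this test uses are sketched below; the arguments are copied from this log, the long build paths are abbreviated, and the traddr is the directory holding the controller's cntrl socket (see the spdk_vfio_user_setup debug line above).
    # Host-side sketch, arguments as used in this run (binaries under .../spdk/build/bin and .../spdk/build/examples)
    TR='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
    spdk_nvme_identify -r "$TR" -g -L nvme -L nvme_vfio -L vfio_pci        # produces the controller dump below
    spdk_nvme_perf -r "$TR" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2   # later read perf run
    spdk_nvme_perf -r "$TR" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2  # later write perf run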
00:09:51.130 [2024-07-15 23:13:06.437309] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:09:51.130 [2024-07-15 23:13:06.437322] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:09:51.130 [2024-07-15 23:13:06.437331] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:09:51.130 [2024-07-15 23:13:06.437340] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:09:51.130 [2024-07-15 23:13:06.437348] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:09:51.130 [2024-07-15 23:13:06.437356] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:09:51.130 [2024-07-15 23:13:06.437364] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:09:51.130 [2024-07-15 23:13:06.437392] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:09:51.130 [2024-07-15 23:13:06.437410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:09:51.130 [2024-07-15 23:13:06.437430] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:09:51.130 [2024-07-15 23:13:06.437446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:09:51.130 [2024-07-15 23:13:06.437463] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:09:51.130 [2024-07-15 23:13:06.437474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:09:51.130 [2024-07-15 23:13:06.437491] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:51.130 [2024-07-15 23:13:06.437503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:09:51.130 [2024-07-15 23:13:06.437527] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:09:51.130 [2024-07-15 23:13:06.437537] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:09:51.130 [2024-07-15 23:13:06.437543] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:09:51.130 [2024-07-15 23:13:06.437548] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:09:51.130 [2024-07-15 23:13:06.437558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:09:51.130 [2024-07-15 23:13:06.437569] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:09:51.130 
[2024-07-15 23:13:06.437577] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:09:51.130 [2024-07-15 23:13:06.437586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:09:51.130 [2024-07-15 23:13:06.437597] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:09:51.130 [2024-07-15 23:13:06.437605] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:51.130 [2024-07-15 23:13:06.437613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:51.130 [2024-07-15 23:13:06.437625] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:09:51.130 [2024-07-15 23:13:06.437633] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:09:51.130 [2024-07-15 23:13:06.437641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:09:51.130 [2024-07-15 23:13:06.437653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:09:51.130 [2024-07-15 23:13:06.437672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:09:51.130 [2024-07-15 23:13:06.437692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:09:51.130 [2024-07-15 23:13:06.437703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:09:51.130 ===================================================== 00:09:51.130 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:51.130 ===================================================== 00:09:51.130 Controller Capabilities/Features 00:09:51.130 ================================ 00:09:51.130 Vendor ID: 4e58 00:09:51.130 Subsystem Vendor ID: 4e58 00:09:51.130 Serial Number: SPDK1 00:09:51.130 Model Number: SPDK bdev Controller 00:09:51.130 Firmware Version: 24.09 00:09:51.130 Recommended Arb Burst: 6 00:09:51.130 IEEE OUI Identifier: 8d 6b 50 00:09:51.130 Multi-path I/O 00:09:51.130 May have multiple subsystem ports: Yes 00:09:51.130 May have multiple controllers: Yes 00:09:51.130 Associated with SR-IOV VF: No 00:09:51.130 Max Data Transfer Size: 131072 00:09:51.130 Max Number of Namespaces: 32 00:09:51.130 Max Number of I/O Queues: 127 00:09:51.130 NVMe Specification Version (VS): 1.3 00:09:51.130 NVMe Specification Version (Identify): 1.3 00:09:51.130 Maximum Queue Entries: 256 00:09:51.130 Contiguous Queues Required: Yes 00:09:51.130 Arbitration Mechanisms Supported 00:09:51.130 Weighted Round Robin: Not Supported 00:09:51.130 Vendor Specific: Not Supported 00:09:51.130 Reset Timeout: 15000 ms 00:09:51.130 Doorbell Stride: 4 bytes 00:09:51.130 NVM Subsystem Reset: Not Supported 00:09:51.130 Command Sets Supported 00:09:51.130 NVM Command Set: Supported 00:09:51.130 Boot Partition: Not Supported 00:09:51.130 Memory Page Size Minimum: 4096 bytes 00:09:51.130 Memory Page Size Maximum: 4096 bytes 00:09:51.130 Persistent Memory Region: Not Supported 
00:09:51.130 Optional Asynchronous Events Supported 00:09:51.130 Namespace Attribute Notices: Supported 00:09:51.130 Firmware Activation Notices: Not Supported 00:09:51.130 ANA Change Notices: Not Supported 00:09:51.130 PLE Aggregate Log Change Notices: Not Supported 00:09:51.130 LBA Status Info Alert Notices: Not Supported 00:09:51.130 EGE Aggregate Log Change Notices: Not Supported 00:09:51.131 Normal NVM Subsystem Shutdown event: Not Supported 00:09:51.131 Zone Descriptor Change Notices: Not Supported 00:09:51.131 Discovery Log Change Notices: Not Supported 00:09:51.131 Controller Attributes 00:09:51.131 128-bit Host Identifier: Supported 00:09:51.131 Non-Operational Permissive Mode: Not Supported 00:09:51.131 NVM Sets: Not Supported 00:09:51.131 Read Recovery Levels: Not Supported 00:09:51.131 Endurance Groups: Not Supported 00:09:51.131 Predictable Latency Mode: Not Supported 00:09:51.131 Traffic Based Keep ALive: Not Supported 00:09:51.131 Namespace Granularity: Not Supported 00:09:51.131 SQ Associations: Not Supported 00:09:51.131 UUID List: Not Supported 00:09:51.131 Multi-Domain Subsystem: Not Supported 00:09:51.131 Fixed Capacity Management: Not Supported 00:09:51.131 Variable Capacity Management: Not Supported 00:09:51.131 Delete Endurance Group: Not Supported 00:09:51.131 Delete NVM Set: Not Supported 00:09:51.131 Extended LBA Formats Supported: Not Supported 00:09:51.131 Flexible Data Placement Supported: Not Supported 00:09:51.131 00:09:51.131 Controller Memory Buffer Support 00:09:51.131 ================================ 00:09:51.131 Supported: No 00:09:51.131 00:09:51.131 Persistent Memory Region Support 00:09:51.131 ================================ 00:09:51.131 Supported: No 00:09:51.131 00:09:51.131 Admin Command Set Attributes 00:09:51.131 ============================ 00:09:51.131 Security Send/Receive: Not Supported 00:09:51.131 Format NVM: Not Supported 00:09:51.131 Firmware Activate/Download: Not Supported 00:09:51.131 Namespace Management: Not Supported 00:09:51.131 Device Self-Test: Not Supported 00:09:51.131 Directives: Not Supported 00:09:51.131 NVMe-MI: Not Supported 00:09:51.131 Virtualization Management: Not Supported 00:09:51.131 Doorbell Buffer Config: Not Supported 00:09:51.131 Get LBA Status Capability: Not Supported 00:09:51.131 Command & Feature Lockdown Capability: Not Supported 00:09:51.131 Abort Command Limit: 4 00:09:51.131 Async Event Request Limit: 4 00:09:51.131 Number of Firmware Slots: N/A 00:09:51.131 Firmware Slot 1 Read-Only: N/A 00:09:51.131 Firmware Activation Without Reset: N/A 00:09:51.131 Multiple Update Detection Support: N/A 00:09:51.131 Firmware Update Granularity: No Information Provided 00:09:51.131 Per-Namespace SMART Log: No 00:09:51.131 Asymmetric Namespace Access Log Page: Not Supported 00:09:51.131 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:09:51.131 Command Effects Log Page: Supported 00:09:51.131 Get Log Page Extended Data: Supported 00:09:51.131 Telemetry Log Pages: Not Supported 00:09:51.131 Persistent Event Log Pages: Not Supported 00:09:51.131 Supported Log Pages Log Page: May Support 00:09:51.131 Commands Supported & Effects Log Page: Not Supported 00:09:51.131 Feature Identifiers & Effects Log Page:May Support 00:09:51.131 NVMe-MI Commands & Effects Log Page: May Support 00:09:51.131 Data Area 4 for Telemetry Log: Not Supported 00:09:51.131 Error Log Page Entries Supported: 128 00:09:51.131 Keep Alive: Supported 00:09:51.131 Keep Alive Granularity: 10000 ms 00:09:51.131 00:09:51.131 NVM Command Set Attributes 
00:09:51.131 ========================== 00:09:51.131 Submission Queue Entry Size 00:09:51.131 Max: 64 00:09:51.131 Min: 64 00:09:51.131 Completion Queue Entry Size 00:09:51.131 Max: 16 00:09:51.131 Min: 16 00:09:51.131 Number of Namespaces: 32 00:09:51.131 Compare Command: Supported 00:09:51.131 Write Uncorrectable Command: Not Supported 00:09:51.131 Dataset Management Command: Supported 00:09:51.131 Write Zeroes Command: Supported 00:09:51.131 Set Features Save Field: Not Supported 00:09:51.131 Reservations: Not Supported 00:09:51.131 Timestamp: Not Supported 00:09:51.131 Copy: Supported 00:09:51.131 Volatile Write Cache: Present 00:09:51.131 Atomic Write Unit (Normal): 1 00:09:51.131 Atomic Write Unit (PFail): 1 00:09:51.131 Atomic Compare & Write Unit: 1 00:09:51.131 Fused Compare & Write: Supported 00:09:51.131 Scatter-Gather List 00:09:51.131 SGL Command Set: Supported (Dword aligned) 00:09:51.131 SGL Keyed: Not Supported 00:09:51.131 SGL Bit Bucket Descriptor: Not Supported 00:09:51.131 SGL Metadata Pointer: Not Supported 00:09:51.131 Oversized SGL: Not Supported 00:09:51.131 SGL Metadata Address: Not Supported 00:09:51.131 SGL Offset: Not Supported 00:09:51.131 Transport SGL Data Block: Not Supported 00:09:51.131 Replay Protected Memory Block: Not Supported 00:09:51.131 00:09:51.131 Firmware Slot Information 00:09:51.131 ========================= 00:09:51.131 Active slot: 1 00:09:51.131 Slot 1 Firmware Revision: 24.09 00:09:51.131 00:09:51.131 00:09:51.131 Commands Supported and Effects 00:09:51.131 ============================== 00:09:51.131 Admin Commands 00:09:51.131 -------------- 00:09:51.131 Get Log Page (02h): Supported 00:09:51.131 Identify (06h): Supported 00:09:51.131 Abort (08h): Supported 00:09:51.131 Set Features (09h): Supported 00:09:51.131 Get Features (0Ah): Supported 00:09:51.131 Asynchronous Event Request (0Ch): Supported 00:09:51.131 Keep Alive (18h): Supported 00:09:51.131 I/O Commands 00:09:51.131 ------------ 00:09:51.131 Flush (00h): Supported LBA-Change 00:09:51.131 Write (01h): Supported LBA-Change 00:09:51.131 Read (02h): Supported 00:09:51.131 Compare (05h): Supported 00:09:51.131 Write Zeroes (08h): Supported LBA-Change 00:09:51.131 Dataset Management (09h): Supported LBA-Change 00:09:51.131 Copy (19h): Supported LBA-Change 00:09:51.131 00:09:51.131 Error Log 00:09:51.131 ========= 00:09:51.131 00:09:51.131 Arbitration 00:09:51.131 =========== 00:09:51.131 Arbitration Burst: 1 00:09:51.131 00:09:51.131 Power Management 00:09:51.131 ================ 00:09:51.131 Number of Power States: 1 00:09:51.131 Current Power State: Power State #0 00:09:51.131 Power State #0: 00:09:51.131 Max Power: 0.00 W 00:09:51.131 Non-Operational State: Operational 00:09:51.131 Entry Latency: Not Reported 00:09:51.131 Exit Latency: Not Reported 00:09:51.131 Relative Read Throughput: 0 00:09:51.131 Relative Read Latency: 0 00:09:51.131 Relative Write Throughput: 0 00:09:51.131 Relative Write Latency: 0 00:09:51.131 Idle Power: Not Reported 00:09:51.131 Active Power: Not Reported 00:09:51.131 Non-Operational Permissive Mode: Not Supported 00:09:51.131 00:09:51.131 Health Information 00:09:51.131 ================== 00:09:51.131 Critical Warnings: 00:09:51.131 Available Spare Space: OK 00:09:51.131 Temperature: OK 00:09:51.131 Device Reliability: OK 00:09:51.131 Read Only: No 00:09:51.131 Volatile Memory Backup: OK 00:09:51.131 Current Temperature: 0 Kelvin (-273 Celsius) 00:09:51.131 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:09:51.131 Available Spare: 0% 00:09:51.131 
[2024-07-15 23:13:06.437975] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:09:51.131 [2024-07-15 23:13:06.437993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:09:51.131 [2024-07-15 23:13:06.438041] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:09:51.131 [2024-07-15 23:13:06.438059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.131 [2024-07-15 23:13:06.438072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.131 [2024-07-15 23:13:06.438086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.131 [2024-07-15 23:13:06.438096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.131 [2024-07-15 23:13:06.438393] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:09:51.131 [2024-07-15 23:13:06.438415] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:09:51.131 [2024-07-15 23:13:06.439395] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:51.131 [2024-07-15 23:13:06.439489] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:09:51.131 [2024-07-15 23:13:06.439506] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:09:51.131 [2024-07-15 23:13:06.440404] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:09:51.131 [2024-07-15 23:13:06.440430] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:09:51.131 [2024-07-15 23:13:06.440496] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:09:51.388 [2024-07-15 23:13:06.442446] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:51.388 Available Spare Threshold: 0% 00:09:51.388 Life Percentage Used: 0% 00:09:51.388 Data Units Read: 0 00:09:51.388 Data Units Written: 0 00:09:51.388 Host Read Commands: 0 00:09:51.388 Host Write Commands: 0 00:09:51.388 Controller Busy Time: 0 minutes 00:09:51.388 Power Cycles: 0 00:09:51.388 Power On Hours: 0 hours 00:09:51.388 Unsafe Shutdowns: 0 00:09:51.388 Unrecoverable Media Errors: 0 00:09:51.388 Lifetime Error Log Entries: 0 00:09:51.388 Warning Temperature Time: 0 minutes 00:09:51.388 Critical Temperature Time: 0 minutes 00:09:51.388 00:09:51.388 Number of Queues 00:09:51.388 ================ 00:09:51.388 Number of I/O Submission Queues: 127 00:09:51.388 Number of I/O Completion Queues: 127 00:09:51.388 00:09:51.388 Active Namespaces 00:09:51.388 ================= 00:09:51.388 Namespace ID:1 00:09:51.388 Error Recovery Timeout: Unlimited 00:09:51.388 Command 
Set Identifier: NVM (00h) 00:09:51.388 Deallocate: Supported 00:09:51.388 Deallocated/Unwritten Error: Not Supported 00:09:51.388 Deallocated Read Value: Unknown 00:09:51.388 Deallocate in Write Zeroes: Not Supported 00:09:51.388 Deallocated Guard Field: 0xFFFF 00:09:51.388 Flush: Supported 00:09:51.388 Reservation: Supported 00:09:51.388 Namespace Sharing Capabilities: Multiple Controllers 00:09:51.388 Size (in LBAs): 131072 (0GiB) 00:09:51.388 Capacity (in LBAs): 131072 (0GiB) 00:09:51.388 Utilization (in LBAs): 131072 (0GiB) 00:09:51.389 NGUID: B63DC92A4A5041DDA86BB0570736C214 00:09:51.389 UUID: b63dc92a-4a50-41dd-a86b-b0570736c214 00:09:51.389 Thin Provisioning: Not Supported 00:09:51.389 Per-NS Atomic Units: Yes 00:09:51.389 Atomic Boundary Size (Normal): 0 00:09:51.389 Atomic Boundary Size (PFail): 0 00:09:51.389 Atomic Boundary Offset: 0 00:09:51.389 Maximum Single Source Range Length: 65535 00:09:51.389 Maximum Copy Length: 65535 00:09:51.389 Maximum Source Range Count: 1 00:09:51.389 NGUID/EUI64 Never Reused: No 00:09:51.389 Namespace Write Protected: No 00:09:51.389 Number of LBA Formats: 1 00:09:51.389 Current LBA Format: LBA Format #00 00:09:51.389 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:51.389 00:09:51.389 23:13:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:09:51.389 EAL: No free 2048 kB hugepages reported on node 1 00:09:51.389 [2024-07-15 23:13:06.682585] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:56.642 Initializing NVMe Controllers 00:09:56.642 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:56.642 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:09:56.642 Initialization complete. Launching workers. 00:09:56.642 ======================================================== 00:09:56.642 Latency(us) 00:09:56.642 Device Information : IOPS MiB/s Average min max 00:09:56.642 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34109.68 133.24 3751.55 1171.36 7548.94 00:09:56.642 ======================================================== 00:09:56.642 Total : 34109.68 133.24 3751.55 1171.36 7548.94 00:09:56.642 00:09:56.642 [2024-07-15 23:13:11.702494] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:56.642 23:13:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:09:56.642 EAL: No free 2048 kB hugepages reported on node 1 00:09:56.642 [2024-07-15 23:13:11.942670] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:01.899 Initializing NVMe Controllers 00:10:01.899 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:01.899 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:10:01.899 Initialization complete. Launching workers. 
00:10:01.899 ======================================================== 00:10:01.899 Latency(us) 00:10:01.899 Device Information : IOPS MiB/s Average min max 00:10:01.899 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16025.60 62.60 7996.99 4966.88 15959.79 00:10:01.899 ======================================================== 00:10:01.899 Total : 16025.60 62.60 7996.99 4966.88 15959.79 00:10:01.899 00:10:01.899 [2024-07-15 23:13:16.979047] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:01.899 23:13:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:10:01.899 EAL: No free 2048 kB hugepages reported on node 1 00:10:01.899 [2024-07-15 23:13:17.193229] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:07.158 [2024-07-15 23:13:22.254060] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:07.159 Initializing NVMe Controllers 00:10:07.159 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:07.159 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:07.159 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:10:07.159 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:10:07.159 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:10:07.159 Initialization complete. Launching workers. 00:10:07.159 Starting thread on core 2 00:10:07.159 Starting thread on core 3 00:10:07.159 Starting thread on core 1 00:10:07.159 23:13:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:10:07.159 EAL: No free 2048 kB hugepages reported on node 1 00:10:07.417 [2024-07-15 23:13:22.573313] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:10.699 [2024-07-15 23:13:25.639127] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:10.699 Initializing NVMe Controllers 00:10:10.699 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:10.699 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:10.699 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:10:10.699 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:10:10.699 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:10:10.699 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:10:10.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:10:10.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:10:10.699 Initialization complete. Launching workers. 
00:10:10.699 Starting thread on core 1 with urgent priority queue 00:10:10.699 Starting thread on core 2 with urgent priority queue 00:10:10.699 Starting thread on core 3 with urgent priority queue 00:10:10.699 Starting thread on core 0 with urgent priority queue 00:10:10.699 SPDK bdev Controller (SPDK1 ) core 0: 5561.00 IO/s 17.98 secs/100000 ios 00:10:10.699 SPDK bdev Controller (SPDK1 ) core 1: 4810.33 IO/s 20.79 secs/100000 ios 00:10:10.699 SPDK bdev Controller (SPDK1 ) core 2: 5399.33 IO/s 18.52 secs/100000 ios 00:10:10.699 SPDK bdev Controller (SPDK1 ) core 3: 6073.00 IO/s 16.47 secs/100000 ios 00:10:10.699 ======================================================== 00:10:10.699 00:10:10.699 23:13:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:10:10.699 EAL: No free 2048 kB hugepages reported on node 1 00:10:10.699 [2024-07-15 23:13:25.938110] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:10.699 Initializing NVMe Controllers 00:10:10.699 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:10.699 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:10.699 Namespace ID: 1 size: 0GB 00:10:10.699 Initialization complete. 00:10:10.699 INFO: using host memory buffer for IO 00:10:10.699 Hello world! 00:10:10.699 [2024-07-15 23:13:25.972791] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:10.956 23:13:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:10:10.956 EAL: No free 2048 kB hugepages reported on node 1 00:10:10.956 [2024-07-15 23:13:26.265254] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:12.328 Initializing NVMe Controllers 00:10:12.328 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:12.328 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:12.328 Initialization complete. Launching workers. 
00:10:12.328 submit (in ns) avg, min, max = 9911.4, 3501.1, 4018894.4 00:10:12.328 complete (in ns) avg, min, max = 24224.7, 2070.0, 7008986.7 00:10:12.328 00:10:12.328 Submit histogram 00:10:12.328 ================ 00:10:12.328 Range in us Cumulative Count 00:10:12.328 3.484 - 3.508: 0.0155% ( 2) 00:10:12.328 3.508 - 3.532: 0.0233% ( 1) 00:10:12.328 3.532 - 3.556: 0.1555% ( 17) 00:10:12.328 3.556 - 3.579: 0.5131% ( 46) 00:10:12.328 3.579 - 3.603: 3.7313% ( 414) 00:10:12.328 3.603 - 3.627: 7.4549% ( 479) 00:10:12.328 3.627 - 3.650: 15.4229% ( 1025) 00:10:12.328 3.650 - 3.674: 24.0516% ( 1110) 00:10:12.328 3.674 - 3.698: 34.1884% ( 1304) 00:10:12.328 3.698 - 3.721: 43.4779% ( 1195) 00:10:12.328 3.721 - 3.745: 49.9611% ( 834) 00:10:12.328 3.745 - 3.769: 54.4776% ( 581) 00:10:12.328 3.769 - 3.793: 58.1779% ( 476) 00:10:12.328 3.793 - 3.816: 62.0647% ( 500) 00:10:12.328 3.816 - 3.840: 65.4151% ( 431) 00:10:12.328 3.840 - 3.864: 69.4108% ( 514) 00:10:12.328 3.864 - 3.887: 73.2276% ( 491) 00:10:12.328 3.887 - 3.911: 77.4720% ( 546) 00:10:12.328 3.911 - 3.935: 81.6775% ( 541) 00:10:12.328 3.935 - 3.959: 84.6859% ( 387) 00:10:12.328 3.959 - 3.982: 86.5438% ( 239) 00:10:12.328 3.982 - 4.006: 88.4873% ( 250) 00:10:12.328 4.006 - 4.030: 89.7932% ( 168) 00:10:12.328 4.030 - 4.053: 91.1147% ( 170) 00:10:12.328 4.053 - 4.077: 92.2497% ( 146) 00:10:12.328 4.077 - 4.101: 93.1281% ( 113) 00:10:12.328 4.101 - 4.124: 93.8977% ( 99) 00:10:12.328 4.124 - 4.148: 94.7528% ( 110) 00:10:12.328 4.148 - 4.172: 95.2425% ( 63) 00:10:12.328 4.172 - 4.196: 95.7167% ( 61) 00:10:12.328 4.196 - 4.219: 96.0432% ( 42) 00:10:12.328 4.219 - 4.243: 96.2998% ( 33) 00:10:12.328 4.243 - 4.267: 96.4086% ( 14) 00:10:12.328 4.267 - 4.290: 96.5096% ( 13) 00:10:12.328 4.290 - 4.314: 96.6107% ( 13) 00:10:12.328 4.314 - 4.338: 96.7351% ( 16) 00:10:12.328 4.338 - 4.361: 96.8361% ( 13) 00:10:12.329 4.361 - 4.385: 96.9061% ( 9) 00:10:12.329 4.385 - 4.409: 96.9838% ( 10) 00:10:12.329 4.409 - 4.433: 97.0149% ( 4) 00:10:12.329 4.433 - 4.456: 97.0538% ( 5) 00:10:12.329 4.456 - 4.480: 97.0771% ( 3) 00:10:12.329 4.480 - 4.504: 97.1238% ( 6) 00:10:12.329 4.504 - 4.527: 97.1315% ( 1) 00:10:12.329 4.527 - 4.551: 97.1393% ( 1) 00:10:12.329 4.551 - 4.575: 97.1549% ( 2) 00:10:12.329 4.575 - 4.599: 97.1859% ( 4) 00:10:12.329 4.646 - 4.670: 97.2248% ( 5) 00:10:12.329 4.670 - 4.693: 97.2404% ( 2) 00:10:12.329 4.693 - 4.717: 97.2637% ( 3) 00:10:12.329 4.717 - 4.741: 97.2948% ( 4) 00:10:12.329 4.741 - 4.764: 97.3414% ( 6) 00:10:12.329 4.764 - 4.788: 97.4036% ( 8) 00:10:12.329 4.788 - 4.812: 97.4658% ( 8) 00:10:12.329 4.812 - 4.836: 97.5047% ( 5) 00:10:12.329 4.836 - 4.859: 97.5591% ( 7) 00:10:12.329 4.859 - 4.883: 97.5979% ( 5) 00:10:12.329 4.883 - 4.907: 97.6679% ( 9) 00:10:12.329 4.907 - 4.930: 97.7379% ( 9) 00:10:12.329 4.930 - 4.954: 97.7534% ( 2) 00:10:12.329 4.954 - 4.978: 97.7923% ( 5) 00:10:12.329 4.978 - 5.001: 97.8001% ( 1) 00:10:12.329 5.001 - 5.025: 97.8156% ( 2) 00:10:12.329 5.049 - 5.073: 97.8623% ( 6) 00:10:12.329 5.073 - 5.096: 97.8700% ( 1) 00:10:12.329 5.167 - 5.191: 97.8778% ( 1) 00:10:12.329 5.215 - 5.239: 97.8856% ( 1) 00:10:12.329 5.239 - 5.262: 97.9011% ( 2) 00:10:12.329 5.262 - 5.286: 97.9244% ( 3) 00:10:12.329 5.310 - 5.333: 97.9322% ( 1) 00:10:12.329 5.333 - 5.357: 97.9400% ( 1) 00:10:12.329 5.570 - 5.594: 97.9478% ( 1) 00:10:12.329 5.736 - 5.760: 97.9633% ( 2) 00:10:12.329 5.807 - 5.831: 97.9711% ( 1) 00:10:12.329 5.950 - 5.973: 97.9789% ( 1) 00:10:12.329 5.997 - 6.021: 97.9866% ( 1) 00:10:12.329 6.021 - 6.044: 98.0022% ( 2) 
00:10:12.329 6.044 - 6.068: 98.0100% ( 1) 00:10:12.329 6.068 - 6.116: 98.0177% ( 1) 00:10:12.329 6.116 - 6.163: 98.0255% ( 1) 00:10:12.329 6.163 - 6.210: 98.0410% ( 2) 00:10:12.329 6.637 - 6.684: 98.0566% ( 2) 00:10:12.329 6.779 - 6.827: 98.0644% ( 1) 00:10:12.329 6.827 - 6.874: 98.0799% ( 2) 00:10:12.329 6.969 - 7.016: 98.0877% ( 1) 00:10:12.329 7.159 - 7.206: 98.0955% ( 1) 00:10:12.329 7.206 - 7.253: 98.1032% ( 1) 00:10:12.329 7.253 - 7.301: 98.1188% ( 2) 00:10:12.329 7.301 - 7.348: 98.1266% ( 1) 00:10:12.329 7.348 - 7.396: 98.1421% ( 2) 00:10:12.329 7.396 - 7.443: 98.1654% ( 3) 00:10:12.329 7.443 - 7.490: 98.1732% ( 1) 00:10:12.329 7.490 - 7.538: 98.1810% ( 1) 00:10:12.329 7.585 - 7.633: 98.1965% ( 2) 00:10:12.329 7.633 - 7.680: 98.2043% ( 1) 00:10:12.329 7.727 - 7.775: 98.2276% ( 3) 00:10:12.329 7.775 - 7.822: 98.2432% ( 2) 00:10:12.329 7.822 - 7.870: 98.2509% ( 1) 00:10:12.329 7.870 - 7.917: 98.2587% ( 1) 00:10:12.329 7.964 - 8.012: 98.2665% ( 1) 00:10:12.329 8.012 - 8.059: 98.2743% ( 1) 00:10:12.329 8.107 - 8.154: 98.2820% ( 1) 00:10:12.329 8.249 - 8.296: 98.2898% ( 1) 00:10:12.329 8.296 - 8.344: 98.2976% ( 1) 00:10:12.329 8.391 - 8.439: 98.3053% ( 1) 00:10:12.329 8.439 - 8.486: 98.3131% ( 1) 00:10:12.329 8.533 - 8.581: 98.3287% ( 2) 00:10:12.329 8.581 - 8.628: 98.3442% ( 2) 00:10:12.329 8.676 - 8.723: 98.3598% ( 2) 00:10:12.329 8.865 - 8.913: 98.3675% ( 1) 00:10:12.329 9.102 - 9.150: 98.3753% ( 1) 00:10:12.329 9.150 - 9.197: 98.3831% ( 1) 00:10:12.329 9.244 - 9.292: 98.3909% ( 1) 00:10:12.329 9.339 - 9.387: 98.4064% ( 2) 00:10:12.329 9.434 - 9.481: 98.4142% ( 1) 00:10:12.329 9.529 - 9.576: 98.4220% ( 1) 00:10:12.329 9.671 - 9.719: 98.4375% ( 2) 00:10:12.329 9.766 - 9.813: 98.4530% ( 2) 00:10:12.329 9.861 - 9.908: 98.4686% ( 2) 00:10:12.329 10.145 - 10.193: 98.4841% ( 2) 00:10:12.329 10.193 - 10.240: 98.4919% ( 1) 00:10:12.329 10.335 - 10.382: 98.5075% ( 2) 00:10:12.329 10.477 - 10.524: 98.5152% ( 1) 00:10:12.329 10.572 - 10.619: 98.5230% ( 1) 00:10:12.329 10.619 - 10.667: 98.5308% ( 1) 00:10:12.329 10.809 - 10.856: 98.5463% ( 2) 00:10:12.329 10.856 - 10.904: 98.5541% ( 1) 00:10:12.329 10.904 - 10.951: 98.5619% ( 1) 00:10:12.329 11.188 - 11.236: 98.5697% ( 1) 00:10:12.329 11.236 - 11.283: 98.5774% ( 1) 00:10:12.329 11.330 - 11.378: 98.5852% ( 1) 00:10:12.329 11.425 - 11.473: 98.5930% ( 1) 00:10:12.329 11.662 - 11.710: 98.6007% ( 1) 00:10:12.329 11.710 - 11.757: 98.6085% ( 1) 00:10:12.329 12.041 - 12.089: 98.6163% ( 1) 00:10:12.329 12.136 - 12.231: 98.6241% ( 1) 00:10:12.329 12.231 - 12.326: 98.6318% ( 1) 00:10:12.329 12.516 - 12.610: 98.6396% ( 1) 00:10:12.329 12.990 - 13.084: 98.6474% ( 1) 00:10:12.329 13.179 - 13.274: 98.6552% ( 1) 00:10:12.329 13.369 - 13.464: 98.6707% ( 2) 00:10:12.329 13.464 - 13.559: 98.6785% ( 1) 00:10:12.329 13.843 - 13.938: 98.6940% ( 2) 00:10:12.329 14.127 - 14.222: 98.7018% ( 1) 00:10:12.329 14.222 - 14.317: 98.7096% ( 1) 00:10:12.329 14.317 - 14.412: 98.7174% ( 1) 00:10:12.329 14.507 - 14.601: 98.7251% ( 1) 00:10:12.329 14.981 - 15.076: 98.7407% ( 2) 00:10:12.329 15.076 - 15.170: 98.7484% ( 1) 00:10:12.329 17.067 - 17.161: 98.7562% ( 1) 00:10:12.329 17.161 - 17.256: 98.7795% ( 3) 00:10:12.329 17.256 - 17.351: 98.7951% ( 2) 00:10:12.329 17.351 - 17.446: 98.8106% ( 2) 00:10:12.329 17.446 - 17.541: 98.8650% ( 7) 00:10:12.329 17.541 - 17.636: 98.9039% ( 5) 00:10:12.329 17.636 - 17.730: 98.9661% ( 8) 00:10:12.329 17.730 - 17.825: 99.0127% ( 6) 00:10:12.329 17.825 - 17.920: 99.0438% ( 4) 00:10:12.329 17.920 - 18.015: 99.0983% ( 7) 00:10:12.329 18.015 - 18.110: 
99.1449% ( 6) 00:10:12.329 18.110 - 18.204: 99.2693% ( 16) 00:10:12.329 18.204 - 18.299: 99.3237% ( 7) 00:10:12.329 18.299 - 18.394: 99.4092% ( 11) 00:10:12.329 18.394 - 18.489: 99.4558% ( 6) 00:10:12.329 18.489 - 18.584: 99.5569% ( 13) 00:10:12.329 18.584 - 18.679: 99.5958% ( 5) 00:10:12.329 18.679 - 18.773: 99.6657% ( 9) 00:10:12.329 18.773 - 18.868: 99.6891% ( 3) 00:10:12.329 18.868 - 18.963: 99.7046% ( 2) 00:10:12.329 18.963 - 19.058: 99.7357% ( 4) 00:10:12.329 19.058 - 19.153: 99.7435% ( 1) 00:10:12.329 19.153 - 19.247: 99.7823% ( 5) 00:10:12.329 19.247 - 19.342: 99.7901% ( 1) 00:10:12.329 19.437 - 19.532: 99.7979% ( 1) 00:10:12.329 19.627 - 19.721: 99.8057% ( 1) 00:10:12.329 19.721 - 19.816: 99.8134% ( 1) 00:10:12.329 22.281 - 22.376: 99.8212% ( 1) 00:10:12.329 23.324 - 23.419: 99.8290% ( 1) 00:10:12.329 23.419 - 23.514: 99.8368% ( 1) 00:10:12.329 24.841 - 25.031: 99.8445% ( 1) 00:10:12.329 32.806 - 32.996: 99.8523% ( 1) 00:10:12.329 3980.705 - 4004.978: 99.9689% ( 15) 00:10:12.329 4004.978 - 4029.250: 100.0000% ( 4) 00:10:12.329 00:10:12.329 Complete histogram 00:10:12.329 ================== 00:10:12.329 Range in us Cumulative Count 00:10:12.329 2.062 - 2.074: 0.0622% ( 8) 00:10:12.329 2.074 - 2.086: 9.5538% ( 1221) 00:10:12.329 2.086 - 2.098: 20.3825% ( 1393) 00:10:12.329 2.098 - 2.110: 23.7873% ( 438) 00:10:12.329 2.110 - 2.121: 48.7174% ( 3207) 00:10:12.329 2.121 - 2.133: 56.6465% ( 1020) 00:10:12.329 2.133 - 2.145: 61.0852% ( 571) 00:10:12.329 2.145 - 2.157: 67.4596% ( 820) 00:10:12.329 2.157 - 2.169: 69.4108% ( 251) 00:10:12.329 2.169 - 2.181: 71.7118% ( 296) 00:10:12.329 2.181 - 2.193: 78.7780% ( 909) 00:10:12.329 2.193 - 2.204: 81.0557% ( 293) 00:10:12.329 2.204 - 2.216: 82.5016% ( 186) 00:10:12.329 2.216 - 2.228: 84.7559% ( 290) 00:10:12.329 2.228 - 2.240: 85.8831% ( 145) 00:10:12.329 2.240 - 2.252: 87.6244% ( 224) 00:10:12.329 2.252 - 2.264: 91.1847% ( 458) 00:10:12.329 2.264 - 2.276: 92.5062% ( 170) 00:10:12.329 2.276 - 2.287: 93.6334% ( 145) 00:10:12.329 2.287 - 2.299: 94.3097% ( 87) 00:10:12.329 2.299 - 2.311: 94.5118% ( 26) 00:10:12.329 2.311 - 2.323: 94.9160% ( 52) 00:10:12.329 2.323 - 2.335: 95.2503% ( 43) 00:10:12.329 2.335 - 2.347: 95.4058% ( 20) 00:10:12.329 2.347 - 2.359: 95.6856% ( 36) 00:10:12.329 2.359 - 2.370: 95.8722% ( 24) 00:10:12.329 2.370 - 2.382: 95.9499% ( 10) 00:10:12.329 2.382 - 2.394: 96.0743% ( 16) 00:10:12.329 2.394 - 2.406: 96.2842% ( 27) 00:10:12.329 2.406 - 2.418: 96.4863% ( 26) 00:10:12.329 2.418 - 2.430: 96.7817% ( 38) 00:10:12.329 2.430 - 2.441: 97.0849% ( 39) 00:10:12.329 2.441 - 2.453: 97.2948% ( 27) 00:10:12.329 2.453 - 2.465: 97.5280% ( 30) 00:10:12.329 2.465 - 2.477: 97.7301% ( 26) 00:10:12.329 2.477 - 2.489: 97.8778% ( 19) 00:10:12.329 2.489 - 2.501: 98.0566% ( 23) 00:10:12.329 2.501 - 2.513: 98.1732% ( 15) 00:10:12.329 2.513 - 2.524: 98.3131% ( 18) 00:10:12.329 2.524 - 2.536: 98.4297% ( 15) 00:10:12.329 2.536 - 2.548: 98.4841% ( 7) 00:10:12.329 2.548 - 2.560: 98.5386% ( 7) 00:10:12.329 2.560 - 2.572: 98.5930% ( 7) 00:10:12.329 2.584 - 2.596: 98.6163% ( 3) 00:10:12.329 2.596 - 2.607: 98.6318% ( 2) 00:10:12.329 2.607 - 2.619: 98.6474% ( 2) 00:10:12.329 2.631 - 2.643: 98.6552% ( 1) 00:10:12.329 2.655 - 2.667: 98.6629% ( 1) 00:10:12.329 2.667 - 2.679: 98.6707% ( 1) 00:10:12.329 2.726 - 2.738: 98.6785% ( 1) 00:10:12.330 3.366 - 3.390: 98.6863% ( 1) 00:10:12.330 3.413 - 3.437: 98.6940% ( 1) 00:10:12.330 3.461 - 3.484: 98.7018% ( 1) 00:10:12.330 3.484 - 3.508: 98.7174% ( 2) 00:10:12.330 3.532 - 3.556: 98.7251% ( 1) 00:10:12.330 3.556 - 3.579: 
98.7562% ( 4) 00:10:12.330 3.579 - 3.603: 98.7640% ( 1) 00:10:12.330 3.603 - 3.627: 98.7873% ( 3) 00:10:12.330 3.627 - 3.650: 98.7951% ( 1) 00:10:12.330 3.698 - 3.721: 98.8184% ( 3) 00:10:12.330 3.721 - 3.745: 98.8262% ( 1) 00:10:12.330 3.793 - 3.816: 98.8495% ( 3) 00:10:12.330 3.816 - 3.840: 98.8573% ( 1) 00:10:12.330 3.840 - 3.864: 98.8650% ( 1) 00:10:12.330 3.935 - 3.959: 98.8806% ( 2) 00:10:12.330 3.982 - 4.006: 98.8884% ( 1) 00:10:12.330 4.077 - 4.101: 98.8961% ( 1) 00:10:12.330 4.124 - 4.148: 98.9039% ( 1) 00:10:12.330 5.499 - 5.523: 9[2024-07-15 23:13:27.284684] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:12.330 8.9117% ( 1) 00:10:12.330 5.523 - 5.547: 98.9272% ( 2) 00:10:12.330 5.950 - 5.973: 98.9350% ( 1) 00:10:12.330 6.116 - 6.163: 98.9428% ( 1) 00:10:12.330 6.163 - 6.210: 98.9506% ( 1) 00:10:12.330 6.210 - 6.258: 98.9583% ( 1) 00:10:12.330 6.400 - 6.447: 98.9661% ( 1) 00:10:12.330 6.684 - 6.732: 98.9739% ( 1) 00:10:12.330 8.201 - 8.249: 98.9817% ( 1) 00:10:12.330 9.102 - 9.150: 98.9894% ( 1) 00:10:12.330 15.834 - 15.929: 99.0127% ( 3) 00:10:12.330 15.929 - 16.024: 99.0361% ( 3) 00:10:12.330 16.024 - 16.119: 99.0672% ( 4) 00:10:12.330 16.119 - 16.213: 99.0827% ( 2) 00:10:12.330 16.213 - 16.308: 99.1138% ( 4) 00:10:12.330 16.308 - 16.403: 99.1294% ( 2) 00:10:12.330 16.403 - 16.498: 99.1760% ( 6) 00:10:12.330 16.498 - 16.593: 99.1993% ( 3) 00:10:12.330 16.593 - 16.687: 99.2615% ( 8) 00:10:12.330 16.687 - 16.782: 99.3004% ( 5) 00:10:12.330 16.782 - 16.877: 99.3315% ( 4) 00:10:12.330 16.877 - 16.972: 99.3548% ( 3) 00:10:12.330 16.972 - 17.067: 99.3781% ( 3) 00:10:12.330 17.067 - 17.161: 99.3937% ( 2) 00:10:12.330 17.161 - 17.256: 99.4170% ( 3) 00:10:12.330 17.446 - 17.541: 99.4248% ( 1) 00:10:12.330 17.541 - 17.636: 99.4403% ( 2) 00:10:12.330 18.204 - 18.299: 99.4481% ( 1) 00:10:12.330 19.437 - 19.532: 99.4558% ( 1) 00:10:12.330 3980.705 - 4004.978: 99.9145% ( 59) 00:10:12.330 4004.978 - 4029.250: 99.9922% ( 10) 00:10:12.330 6990.507 - 7039.052: 100.0000% ( 1) 00:10:12.330 00:10:12.330 23:13:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:10:12.330 23:13:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:10:12.330 23:13:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:10:12.330 23:13:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:10:12.330 23:13:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:12.330 [ 00:10:12.330 { 00:10:12.330 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:12.330 "subtype": "Discovery", 00:10:12.330 "listen_addresses": [], 00:10:12.330 "allow_any_host": true, 00:10:12.330 "hosts": [] 00:10:12.330 }, 00:10:12.330 { 00:10:12.330 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:12.330 "subtype": "NVMe", 00:10:12.330 "listen_addresses": [ 00:10:12.330 { 00:10:12.330 "trtype": "VFIOUSER", 00:10:12.330 "adrfam": "IPv4", 00:10:12.330 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:12.330 "trsvcid": "0" 00:10:12.330 } 00:10:12.330 ], 00:10:12.330 "allow_any_host": true, 00:10:12.330 "hosts": [], 00:10:12.330 "serial_number": "SPDK1", 00:10:12.330 "model_number": "SPDK bdev Controller", 00:10:12.330 "max_namespaces": 32, 00:10:12.330 
"min_cntlid": 1, 00:10:12.330 "max_cntlid": 65519, 00:10:12.330 "namespaces": [ 00:10:12.330 { 00:10:12.330 "nsid": 1, 00:10:12.330 "bdev_name": "Malloc1", 00:10:12.330 "name": "Malloc1", 00:10:12.330 "nguid": "B63DC92A4A5041DDA86BB0570736C214", 00:10:12.330 "uuid": "b63dc92a-4a50-41dd-a86b-b0570736c214" 00:10:12.330 } 00:10:12.330 ] 00:10:12.330 }, 00:10:12.330 { 00:10:12.330 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:12.330 "subtype": "NVMe", 00:10:12.330 "listen_addresses": [ 00:10:12.330 { 00:10:12.330 "trtype": "VFIOUSER", 00:10:12.330 "adrfam": "IPv4", 00:10:12.330 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:12.330 "trsvcid": "0" 00:10:12.330 } 00:10:12.330 ], 00:10:12.330 "allow_any_host": true, 00:10:12.330 "hosts": [], 00:10:12.330 "serial_number": "SPDK2", 00:10:12.330 "model_number": "SPDK bdev Controller", 00:10:12.330 "max_namespaces": 32, 00:10:12.330 "min_cntlid": 1, 00:10:12.330 "max_cntlid": 65519, 00:10:12.330 "namespaces": [ 00:10:12.330 { 00:10:12.330 "nsid": 1, 00:10:12.330 "bdev_name": "Malloc2", 00:10:12.330 "name": "Malloc2", 00:10:12.330 "nguid": "EFDBFD6DF7624F7889250248601FF806", 00:10:12.330 "uuid": "efdbfd6d-f762-4f78-8925-0248601ff806" 00:10:12.330 } 00:10:12.330 ] 00:10:12.330 } 00:10:12.330 ] 00:10:12.330 23:13:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:12.330 23:13:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2283918 00:10:12.330 23:13:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:10:12.330 23:13:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:12.330 23:13:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:10:12.330 23:13:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:12.330 23:13:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:10:12.330 23:13:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:10:12.330 23:13:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:12.330 23:13:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:10:12.588 EAL: No free 2048 kB hugepages reported on node 1 00:10:12.588 [2024-07-15 23:13:27.753285] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:12.588 Malloc3 00:10:12.588 23:13:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:10:12.845 [2024-07-15 23:13:28.097981] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:12.845 23:13:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:12.845 Asynchronous Event Request test 00:10:12.845 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:12.845 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:12.845 Registering asynchronous event callbacks... 00:10:12.845 Starting namespace attribute notice tests for all controllers... 00:10:12.845 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:12.845 aer_cb - Changed Namespace 00:10:12.845 Cleaning up... 00:10:13.101 [ 00:10:13.101 { 00:10:13.101 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:13.101 "subtype": "Discovery", 00:10:13.101 "listen_addresses": [], 00:10:13.101 "allow_any_host": true, 00:10:13.101 "hosts": [] 00:10:13.101 }, 00:10:13.101 { 00:10:13.101 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:13.101 "subtype": "NVMe", 00:10:13.101 "listen_addresses": [ 00:10:13.101 { 00:10:13.101 "trtype": "VFIOUSER", 00:10:13.101 "adrfam": "IPv4", 00:10:13.102 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:13.102 "trsvcid": "0" 00:10:13.102 } 00:10:13.102 ], 00:10:13.102 "allow_any_host": true, 00:10:13.102 "hosts": [], 00:10:13.102 "serial_number": "SPDK1", 00:10:13.102 "model_number": "SPDK bdev Controller", 00:10:13.102 "max_namespaces": 32, 00:10:13.102 "min_cntlid": 1, 00:10:13.102 "max_cntlid": 65519, 00:10:13.102 "namespaces": [ 00:10:13.102 { 00:10:13.102 "nsid": 1, 00:10:13.102 "bdev_name": "Malloc1", 00:10:13.102 "name": "Malloc1", 00:10:13.102 "nguid": "B63DC92A4A5041DDA86BB0570736C214", 00:10:13.102 "uuid": "b63dc92a-4a50-41dd-a86b-b0570736c214" 00:10:13.102 }, 00:10:13.102 { 00:10:13.102 "nsid": 2, 00:10:13.102 "bdev_name": "Malloc3", 00:10:13.102 "name": "Malloc3", 00:10:13.102 "nguid": "1C936976F15F4C1C8BCE6D91232FABF2", 00:10:13.102 "uuid": "1c936976-f15f-4c1c-8bce-6d91232fabf2" 00:10:13.102 } 00:10:13.102 ] 00:10:13.102 }, 00:10:13.102 { 00:10:13.102 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:13.102 "subtype": "NVMe", 00:10:13.102 "listen_addresses": [ 00:10:13.102 { 00:10:13.102 "trtype": "VFIOUSER", 00:10:13.102 "adrfam": "IPv4", 00:10:13.102 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:13.102 "trsvcid": "0" 00:10:13.102 } 00:10:13.102 ], 00:10:13.102 "allow_any_host": true, 00:10:13.102 "hosts": [], 00:10:13.102 "serial_number": "SPDK2", 00:10:13.102 "model_number": "SPDK bdev Controller", 00:10:13.102 
"max_namespaces": 32, 00:10:13.102 "min_cntlid": 1, 00:10:13.102 "max_cntlid": 65519, 00:10:13.102 "namespaces": [ 00:10:13.102 { 00:10:13.102 "nsid": 1, 00:10:13.102 "bdev_name": "Malloc2", 00:10:13.102 "name": "Malloc2", 00:10:13.102 "nguid": "EFDBFD6DF7624F7889250248601FF806", 00:10:13.102 "uuid": "efdbfd6d-f762-4f78-8925-0248601ff806" 00:10:13.102 } 00:10:13.102 ] 00:10:13.102 } 00:10:13.102 ] 00:10:13.102 23:13:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2283918 00:10:13.102 23:13:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:13.102 23:13:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:13.102 23:13:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:10:13.102 23:13:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:10:13.102 [2024-07-15 23:13:28.372167] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:10:13.102 [2024-07-15 23:13:28.372212] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2283930 ] 00:10:13.102 EAL: No free 2048 kB hugepages reported on node 1 00:10:13.102 [2024-07-15 23:13:28.404879] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:10:13.102 [2024-07-15 23:13:28.410274] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:13.102 [2024-07-15 23:13:28.410304] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f41da0e4000 00:10:13.102 [2024-07-15 23:13:28.411273] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:13.102 [2024-07-15 23:13:28.412277] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:13.102 [2024-07-15 23:13:28.413287] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:13.102 [2024-07-15 23:13:28.414293] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:13.102 [2024-07-15 23:13:28.415303] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:13.102 [2024-07-15 23:13:28.416325] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:13.360 [2024-07-15 23:13:28.417330] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:13.360 [2024-07-15 23:13:28.418356] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:13.360 [2024-07-15 23:13:28.419369] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:13.360 [2024-07-15 23:13:28.419391] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f41da0d9000 00:10:13.360 [2024-07-15 23:13:28.420557] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:13.360 [2024-07-15 23:13:28.435318] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:10:13.360 [2024-07-15 23:13:28.435353] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:10:13.360 [2024-07-15 23:13:28.437452] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:10:13.360 [2024-07-15 23:13:28.437507] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:10:13.360 [2024-07-15 23:13:28.437596] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:10:13.360 [2024-07-15 23:13:28.437622] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:10:13.360 [2024-07-15 23:13:28.437633] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:10:13.361 [2024-07-15 23:13:28.438457] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:10:13.361 [2024-07-15 23:13:28.438483] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:10:13.361 [2024-07-15 23:13:28.438496] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:10:13.361 [2024-07-15 23:13:28.439460] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:10:13.361 [2024-07-15 23:13:28.439481] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:10:13.361 [2024-07-15 23:13:28.439495] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:10:13.361 [2024-07-15 23:13:28.440464] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:10:13.361 [2024-07-15 23:13:28.440485] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:10:13.361 [2024-07-15 23:13:28.441490] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:10:13.361 [2024-07-15 23:13:28.441511] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:10:13.361 [2024-07-15 23:13:28.441520] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:10:13.361 [2024-07-15 23:13:28.441532] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:10:13.361 [2024-07-15 23:13:28.441642] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:10:13.361 [2024-07-15 23:13:28.441650] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:10:13.361 [2024-07-15 23:13:28.441658] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:10:13.361 [2024-07-15 23:13:28.442482] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:10:13.361 [2024-07-15 23:13:28.443488] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:10:13.361 [2024-07-15 23:13:28.444492] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:10:13.361 [2024-07-15 23:13:28.445485] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:13.361 [2024-07-15 23:13:28.445552] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:10:13.361 [2024-07-15 23:13:28.449748] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:10:13.361 [2024-07-15 23:13:28.449769] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:10:13.361 [2024-07-15 23:13:28.449779] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:10:13.361 [2024-07-15 23:13:28.449819] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:10:13.361 [2024-07-15 23:13:28.449836] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:10:13.361 [2024-07-15 23:13:28.449861] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:13.361 [2024-07-15 23:13:28.449872] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:13.361 [2024-07-15 23:13:28.449891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:13.361 [2024-07-15 23:13:28.457752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:10:13.361 [2024-07-15 23:13:28.457777] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:10:13.361 [2024-07-15 23:13:28.457802] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:10:13.361 [2024-07-15 23:13:28.457810] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:10:13.361 [2024-07-15 23:13:28.457818] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:10:13.361 [2024-07-15 23:13:28.457830] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:10:13.361 [2024-07-15 23:13:28.457840] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:10:13.361 [2024-07-15 23:13:28.457848] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:10:13.361 [2024-07-15 23:13:28.457863] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:10:13.361 [2024-07-15 23:13:28.457884] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:10:13.361 [2024-07-15 23:13:28.465750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:10:13.361 [2024-07-15 23:13:28.465779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:13.361 [2024-07-15 23:13:28.465793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:13.361 [2024-07-15 23:13:28.465804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:13.361 [2024-07-15 23:13:28.465816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:13.361 [2024-07-15 23:13:28.465824] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:10:13.361 [2024-07-15 23:13:28.465839] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:10:13.361 [2024-07-15 23:13:28.465853] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:10:13.361 [2024-07-15 23:13:28.473750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:10:13.361 [2024-07-15 23:13:28.473769] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:10:13.361 [2024-07-15 23:13:28.473778] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:10:13.361 [2024-07-15 23:13:28.473794] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:10:13.361 [2024-07-15 23:13:28.473806] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:10:13.361 [2024-07-15 23:13:28.473820] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:13.361 [2024-07-15 23:13:28.481761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:10:13.361 [2024-07-15 23:13:28.481837] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:10:13.361 [2024-07-15 23:13:28.481854] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:10:13.361 [2024-07-15 23:13:28.481867] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:10:13.361 [2024-07-15 23:13:28.481876] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:10:13.361 [2024-07-15 23:13:28.481886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:10:13.361 [2024-07-15 23:13:28.489764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:10:13.361 [2024-07-15 23:13:28.489789] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:10:13.361 [2024-07-15 23:13:28.489805] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:10:13.361 [2024-07-15 23:13:28.489820] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:10:13.361 [2024-07-15 23:13:28.489833] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:13.361 [2024-07-15 23:13:28.489841] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:13.361 [2024-07-15 23:13:28.489851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:13.361 [2024-07-15 23:13:28.497763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:10:13.361 [2024-07-15 23:13:28.497795] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:10:13.361 [2024-07-15 23:13:28.497811] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:10:13.361 [2024-07-15 23:13:28.497825] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:13.361 [2024-07-15 23:13:28.497833] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:13.361 [2024-07-15 23:13:28.497843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:13.361 [2024-07-15 23:13:28.505765] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:10:13.361 [2024-07-15 23:13:28.505785] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:10:13.361 [2024-07-15 23:13:28.505798] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:10:13.361 [2024-07-15 23:13:28.505811] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:10:13.361 [2024-07-15 23:13:28.505826] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:10:13.361 [2024-07-15 23:13:28.505834] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:10:13.361 [2024-07-15 23:13:28.505843] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:10:13.361 [2024-07-15 23:13:28.505852] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:10:13.361 [2024-07-15 23:13:28.505859] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:10:13.361 [2024-07-15 23:13:28.505868] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:10:13.361 [2024-07-15 23:13:28.505895] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:10:13.361 [2024-07-15 23:13:28.513749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:10:13.361 [2024-07-15 23:13:28.513779] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:10:13.361 [2024-07-15 23:13:28.521751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:10:13.362 [2024-07-15 23:13:28.521776] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:10:13.362 [2024-07-15 23:13:28.529753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:10:13.362 [2024-07-15 23:13:28.529779] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:13.362 [2024-07-15 23:13:28.537763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:10:13.362 [2024-07-15 23:13:28.537807] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:10:13.362 [2024-07-15 23:13:28.537818] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:10:13.362 [2024-07-15 23:13:28.537825] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 
00:10:13.362 [2024-07-15 23:13:28.537831] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:10:13.362 [2024-07-15 23:13:28.537840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:10:13.362 [2024-07-15 23:13:28.537852] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:10:13.362 [2024-07-15 23:13:28.537860] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:10:13.362 [2024-07-15 23:13:28.537869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:10:13.362 [2024-07-15 23:13:28.537881] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:10:13.362 [2024-07-15 23:13:28.537889] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:13.362 [2024-07-15 23:13:28.537897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:13.362 [2024-07-15 23:13:28.537910] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:10:13.362 [2024-07-15 23:13:28.537918] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:10:13.362 [2024-07-15 23:13:28.537926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:10:13.362 [2024-07-15 23:13:28.545766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:10:13.362 [2024-07-15 23:13:28.545805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:10:13.362 [2024-07-15 23:13:28.545823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:10:13.362 [2024-07-15 23:13:28.545836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:10:13.362 ===================================================== 00:10:13.362 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:13.362 ===================================================== 00:10:13.362 Controller Capabilities/Features 00:10:13.362 ================================ 00:10:13.362 Vendor ID: 4e58 00:10:13.362 Subsystem Vendor ID: 4e58 00:10:13.362 Serial Number: SPDK2 00:10:13.362 Model Number: SPDK bdev Controller 00:10:13.362 Firmware Version: 24.09 00:10:13.362 Recommended Arb Burst: 6 00:10:13.362 IEEE OUI Identifier: 8d 6b 50 00:10:13.362 Multi-path I/O 00:10:13.362 May have multiple subsystem ports: Yes 00:10:13.362 May have multiple controllers: Yes 00:10:13.362 Associated with SR-IOV VF: No 00:10:13.362 Max Data Transfer Size: 131072 00:10:13.362 Max Number of Namespaces: 32 00:10:13.362 Max Number of I/O Queues: 127 00:10:13.362 NVMe Specification Version (VS): 1.3 00:10:13.362 NVMe Specification Version (Identify): 1.3 00:10:13.362 Maximum Queue Entries: 256 00:10:13.362 Contiguous Queues Required: Yes 00:10:13.362 Arbitration Mechanisms 
Supported 00:10:13.362 Weighted Round Robin: Not Supported 00:10:13.362 Vendor Specific: Not Supported 00:10:13.362 Reset Timeout: 15000 ms 00:10:13.362 Doorbell Stride: 4 bytes 00:10:13.362 NVM Subsystem Reset: Not Supported 00:10:13.362 Command Sets Supported 00:10:13.362 NVM Command Set: Supported 00:10:13.362 Boot Partition: Not Supported 00:10:13.362 Memory Page Size Minimum: 4096 bytes 00:10:13.362 Memory Page Size Maximum: 4096 bytes 00:10:13.362 Persistent Memory Region: Not Supported 00:10:13.362 Optional Asynchronous Events Supported 00:10:13.362 Namespace Attribute Notices: Supported 00:10:13.362 Firmware Activation Notices: Not Supported 00:10:13.362 ANA Change Notices: Not Supported 00:10:13.362 PLE Aggregate Log Change Notices: Not Supported 00:10:13.362 LBA Status Info Alert Notices: Not Supported 00:10:13.362 EGE Aggregate Log Change Notices: Not Supported 00:10:13.362 Normal NVM Subsystem Shutdown event: Not Supported 00:10:13.362 Zone Descriptor Change Notices: Not Supported 00:10:13.362 Discovery Log Change Notices: Not Supported 00:10:13.362 Controller Attributes 00:10:13.362 128-bit Host Identifier: Supported 00:10:13.362 Non-Operational Permissive Mode: Not Supported 00:10:13.362 NVM Sets: Not Supported 00:10:13.362 Read Recovery Levels: Not Supported 00:10:13.362 Endurance Groups: Not Supported 00:10:13.362 Predictable Latency Mode: Not Supported 00:10:13.362 Traffic Based Keep ALive: Not Supported 00:10:13.362 Namespace Granularity: Not Supported 00:10:13.362 SQ Associations: Not Supported 00:10:13.362 UUID List: Not Supported 00:10:13.362 Multi-Domain Subsystem: Not Supported 00:10:13.362 Fixed Capacity Management: Not Supported 00:10:13.362 Variable Capacity Management: Not Supported 00:10:13.362 Delete Endurance Group: Not Supported 00:10:13.362 Delete NVM Set: Not Supported 00:10:13.362 Extended LBA Formats Supported: Not Supported 00:10:13.362 Flexible Data Placement Supported: Not Supported 00:10:13.362 00:10:13.362 Controller Memory Buffer Support 00:10:13.362 ================================ 00:10:13.362 Supported: No 00:10:13.362 00:10:13.362 Persistent Memory Region Support 00:10:13.362 ================================ 00:10:13.362 Supported: No 00:10:13.362 00:10:13.362 Admin Command Set Attributes 00:10:13.362 ============================ 00:10:13.362 Security Send/Receive: Not Supported 00:10:13.362 Format NVM: Not Supported 00:10:13.362 Firmware Activate/Download: Not Supported 00:10:13.362 Namespace Management: Not Supported 00:10:13.362 Device Self-Test: Not Supported 00:10:13.362 Directives: Not Supported 00:10:13.362 NVMe-MI: Not Supported 00:10:13.362 Virtualization Management: Not Supported 00:10:13.362 Doorbell Buffer Config: Not Supported 00:10:13.362 Get LBA Status Capability: Not Supported 00:10:13.362 Command & Feature Lockdown Capability: Not Supported 00:10:13.362 Abort Command Limit: 4 00:10:13.362 Async Event Request Limit: 4 00:10:13.362 Number of Firmware Slots: N/A 00:10:13.362 Firmware Slot 1 Read-Only: N/A 00:10:13.362 Firmware Activation Without Reset: N/A 00:10:13.362 Multiple Update Detection Support: N/A 00:10:13.362 Firmware Update Granularity: No Information Provided 00:10:13.362 Per-Namespace SMART Log: No 00:10:13.362 Asymmetric Namespace Access Log Page: Not Supported 00:10:13.362 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:10:13.362 Command Effects Log Page: Supported 00:10:13.362 Get Log Page Extended Data: Supported 00:10:13.362 Telemetry Log Pages: Not Supported 00:10:13.362 Persistent Event Log Pages: Not Supported 
00:10:13.362 Supported Log Pages Log Page: May Support 00:10:13.362 Commands Supported & Effects Log Page: Not Supported 00:10:13.362 Feature Identifiers & Effects Log Page:May Support 00:10:13.362 NVMe-MI Commands & Effects Log Page: May Support 00:10:13.362 Data Area 4 for Telemetry Log: Not Supported 00:10:13.362 Error Log Page Entries Supported: 128 00:10:13.362 Keep Alive: Supported 00:10:13.362 Keep Alive Granularity: 10000 ms 00:10:13.362 00:10:13.362 NVM Command Set Attributes 00:10:13.362 ========================== 00:10:13.362 Submission Queue Entry Size 00:10:13.362 Max: 64 00:10:13.362 Min: 64 00:10:13.362 Completion Queue Entry Size 00:10:13.362 Max: 16 00:10:13.362 Min: 16 00:10:13.362 Number of Namespaces: 32 00:10:13.362 Compare Command: Supported 00:10:13.362 Write Uncorrectable Command: Not Supported 00:10:13.362 Dataset Management Command: Supported 00:10:13.362 Write Zeroes Command: Supported 00:10:13.362 Set Features Save Field: Not Supported 00:10:13.362 Reservations: Not Supported 00:10:13.362 Timestamp: Not Supported 00:10:13.362 Copy: Supported 00:10:13.362 Volatile Write Cache: Present 00:10:13.362 Atomic Write Unit (Normal): 1 00:10:13.362 Atomic Write Unit (PFail): 1 00:10:13.362 Atomic Compare & Write Unit: 1 00:10:13.362 Fused Compare & Write: Supported 00:10:13.362 Scatter-Gather List 00:10:13.362 SGL Command Set: Supported (Dword aligned) 00:10:13.362 SGL Keyed: Not Supported 00:10:13.362 SGL Bit Bucket Descriptor: Not Supported 00:10:13.362 SGL Metadata Pointer: Not Supported 00:10:13.362 Oversized SGL: Not Supported 00:10:13.362 SGL Metadata Address: Not Supported 00:10:13.362 SGL Offset: Not Supported 00:10:13.362 Transport SGL Data Block: Not Supported 00:10:13.362 Replay Protected Memory Block: Not Supported 00:10:13.362 00:10:13.362 Firmware Slot Information 00:10:13.362 ========================= 00:10:13.362 Active slot: 1 00:10:13.362 Slot 1 Firmware Revision: 24.09 00:10:13.362 00:10:13.362 00:10:13.362 Commands Supported and Effects 00:10:13.362 ============================== 00:10:13.362 Admin Commands 00:10:13.362 -------------- 00:10:13.362 Get Log Page (02h): Supported 00:10:13.362 Identify (06h): Supported 00:10:13.362 Abort (08h): Supported 00:10:13.362 Set Features (09h): Supported 00:10:13.362 Get Features (0Ah): Supported 00:10:13.362 Asynchronous Event Request (0Ch): Supported 00:10:13.363 Keep Alive (18h): Supported 00:10:13.363 I/O Commands 00:10:13.363 ------------ 00:10:13.363 Flush (00h): Supported LBA-Change 00:10:13.363 Write (01h): Supported LBA-Change 00:10:13.363 Read (02h): Supported 00:10:13.363 Compare (05h): Supported 00:10:13.363 Write Zeroes (08h): Supported LBA-Change 00:10:13.363 Dataset Management (09h): Supported LBA-Change 00:10:13.363 Copy (19h): Supported LBA-Change 00:10:13.363 00:10:13.363 Error Log 00:10:13.363 ========= 00:10:13.363 00:10:13.363 Arbitration 00:10:13.363 =========== 00:10:13.363 Arbitration Burst: 1 00:10:13.363 00:10:13.363 Power Management 00:10:13.363 ================ 00:10:13.363 Number of Power States: 1 00:10:13.363 Current Power State: Power State #0 00:10:13.363 Power State #0: 00:10:13.363 Max Power: 0.00 W 00:10:13.363 Non-Operational State: Operational 00:10:13.363 Entry Latency: Not Reported 00:10:13.363 Exit Latency: Not Reported 00:10:13.363 Relative Read Throughput: 0 00:10:13.363 Relative Read Latency: 0 00:10:13.363 Relative Write Throughput: 0 00:10:13.363 Relative Write Latency: 0 00:10:13.363 Idle Power: Not Reported 00:10:13.363 Active Power: Not Reported 00:10:13.363 
Non-Operational Permissive Mode: Not Supported 00:10:13.363 00:10:13.363 Health Information 00:10:13.363 ================== 00:10:13.363 Critical Warnings: 00:10:13.363 Available Spare Space: OK 00:10:13.363 Temperature: OK 00:10:13.363 Device Reliability: OK 00:10:13.363 Read Only: No 00:10:13.363 Volatile Memory Backup: OK 00:10:13.363 Current Temperature: 0 Kelvin (-273 Celsius) 00:10:13.363 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:10:13.363 Available Spare: 0% 00:10:13.363 Available Sp[2024-07-15 23:13:28.545957] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:10:13.363 [2024-07-15 23:13:28.553768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:10:13.363 [2024-07-15 23:13:28.553822] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:10:13.363 [2024-07-15 23:13:28.553840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:13.363 [2024-07-15 23:13:28.553855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:13.363 [2024-07-15 23:13:28.553865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:13.363 [2024-07-15 23:13:28.553875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:13.363 [2024-07-15 23:13:28.553957] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:10:13.363 [2024-07-15 23:13:28.553979] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:10:13.363 [2024-07-15 23:13:28.554963] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:13.363 [2024-07-15 23:13:28.555037] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:10:13.363 [2024-07-15 23:13:28.555066] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:10:13.363 [2024-07-15 23:13:28.555975] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:10:13.363 [2024-07-15 23:13:28.555999] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:10:13.363 [2024-07-15 23:13:28.556066] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:10:13.363 [2024-07-15 23:13:28.557266] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:13.363 are Threshold: 0% 00:10:13.363 Life Percentage Used: 0% 00:10:13.363 Data Units Read: 0 00:10:13.363 Data Units Written: 0 00:10:13.363 Host Read Commands: 0 00:10:13.363 Host Write Commands: 0 00:10:13.363 Controller Busy Time: 0 minutes 00:10:13.363 Power Cycles: 0 00:10:13.363 Power On Hours: 0 hours 00:10:13.363 Unsafe Shutdowns: 0 00:10:13.363 Unrecoverable Media 
Errors: 0 00:10:13.363 Lifetime Error Log Entries: 0 00:10:13.363 Warning Temperature Time: 0 minutes 00:10:13.363 Critical Temperature Time: 0 minutes 00:10:13.363 00:10:13.363 Number of Queues 00:10:13.363 ================ 00:10:13.363 Number of I/O Submission Queues: 127 00:10:13.363 Number of I/O Completion Queues: 127 00:10:13.363 00:10:13.363 Active Namespaces 00:10:13.363 ================= 00:10:13.363 Namespace ID:1 00:10:13.363 Error Recovery Timeout: Unlimited 00:10:13.363 Command Set Identifier: NVM (00h) 00:10:13.363 Deallocate: Supported 00:10:13.363 Deallocated/Unwritten Error: Not Supported 00:10:13.363 Deallocated Read Value: Unknown 00:10:13.363 Deallocate in Write Zeroes: Not Supported 00:10:13.363 Deallocated Guard Field: 0xFFFF 00:10:13.363 Flush: Supported 00:10:13.363 Reservation: Supported 00:10:13.363 Namespace Sharing Capabilities: Multiple Controllers 00:10:13.363 Size (in LBAs): 131072 (0GiB) 00:10:13.363 Capacity (in LBAs): 131072 (0GiB) 00:10:13.363 Utilization (in LBAs): 131072 (0GiB) 00:10:13.363 NGUID: EFDBFD6DF7624F7889250248601FF806 00:10:13.363 UUID: efdbfd6d-f762-4f78-8925-0248601ff806 00:10:13.363 Thin Provisioning: Not Supported 00:10:13.363 Per-NS Atomic Units: Yes 00:10:13.363 Atomic Boundary Size (Normal): 0 00:10:13.363 Atomic Boundary Size (PFail): 0 00:10:13.363 Atomic Boundary Offset: 0 00:10:13.363 Maximum Single Source Range Length: 65535 00:10:13.363 Maximum Copy Length: 65535 00:10:13.363 Maximum Source Range Count: 1 00:10:13.363 NGUID/EUI64 Never Reused: No 00:10:13.363 Namespace Write Protected: No 00:10:13.363 Number of LBA Formats: 1 00:10:13.363 Current LBA Format: LBA Format #00 00:10:13.363 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:13.363 00:10:13.363 23:13:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:10:13.363 EAL: No free 2048 kB hugepages reported on node 1 00:10:13.620 [2024-07-15 23:13:28.793599] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:18.925 Initializing NVMe Controllers 00:10:18.925 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:18.925 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:18.925 Initialization complete. Launching workers. 
00:10:18.925 ======================================================== 00:10:18.925 Latency(us) 00:10:18.925 Device Information : IOPS MiB/s Average min max 00:10:18.925 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34401.60 134.38 3720.42 1154.10 9941.56 00:10:18.925 ======================================================== 00:10:18.925 Total : 34401.60 134.38 3720.42 1154.10 9941.56 00:10:18.925 00:10:18.925 [2024-07-15 23:13:33.892142] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:18.925 23:13:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:10:18.925 EAL: No free 2048 kB hugepages reported on node 1 00:10:18.925 [2024-07-15 23:13:34.134934] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:24.176 Initializing NVMe Controllers 00:10:24.176 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:24.176 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:24.176 Initialization complete. Launching workers. 00:10:24.176 ======================================================== 00:10:24.176 Latency(us) 00:10:24.176 Device Information : IOPS MiB/s Average min max 00:10:24.176 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33261.40 129.93 3849.99 1190.92 8160.56 00:10:24.176 ======================================================== 00:10:24.176 Total : 33261.40 129.93 3849.99 1190.92 8160.56 00:10:24.176 00:10:24.176 [2024-07-15 23:13:39.157111] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:24.176 23:13:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:10:24.176 EAL: No free 2048 kB hugepages reported on node 1 00:10:24.176 [2024-07-15 23:13:39.367096] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:29.462 [2024-07-15 23:13:44.517918] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:29.462 Initializing NVMe Controllers 00:10:29.462 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:29.462 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:29.462 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:10:29.462 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:10:29.462 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:10:29.462 Initialization complete. Launching workers. 
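A quick cross-check of the two spdk_nvme_perf tables above: both runs used -o 4096, so the MiB/s column should equal IOPS x 4096 / 2^20. A minimal shell verification, assuming only that bc is available on the host:

# 4 KiB read run: 34401.60 IO/s
echo 'scale=3; 34401.60 * 4096 / 1048576' | bc    # 134.381 -> reported as 134.38 MiB/s
# 4 KiB write run: 33261.40 IO/s
echo 'scale=3; 33261.40 * 4096 / 1048576' | bc    # 129.927 -> reported as 129.93 MiB/s

The same relationship holds for any -o value, which makes it easy to spot a mislabeled block size when comparing runs.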
00:10:29.462 Starting thread on core 2 00:10:29.462 Starting thread on core 3 00:10:29.462 Starting thread on core 1 00:10:29.462 23:13:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:10:29.462 EAL: No free 2048 kB hugepages reported on node 1 00:10:29.720 [2024-07-15 23:13:44.832524] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:33.001 [2024-07-15 23:13:47.917435] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:33.001 Initializing NVMe Controllers 00:10:33.001 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:33.001 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:33.001 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:10:33.001 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:10:33.001 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:10:33.001 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:10:33.001 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:10:33.001 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:10:33.001 Initialization complete. Launching workers. 00:10:33.001 Starting thread on core 1 with urgent priority queue 00:10:33.001 Starting thread on core 2 with urgent priority queue 00:10:33.001 Starting thread on core 3 with urgent priority queue 00:10:33.001 Starting thread on core 0 with urgent priority queue 00:10:33.001 SPDK bdev Controller (SPDK2 ) core 0: 4264.33 IO/s 23.45 secs/100000 ios 00:10:33.001 SPDK bdev Controller (SPDK2 ) core 1: 4350.67 IO/s 22.98 secs/100000 ios 00:10:33.001 SPDK bdev Controller (SPDK2 ) core 2: 3965.00 IO/s 25.22 secs/100000 ios 00:10:33.001 SPDK bdev Controller (SPDK2 ) core 3: 4348.67 IO/s 23.00 secs/100000 ios 00:10:33.001 ======================================================== 00:10:33.001 00:10:33.001 23:13:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:33.001 EAL: No free 2048 kB hugepages reported on node 1 00:10:33.001 [2024-07-15 23:13:48.227325] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:33.001 Initializing NVMe Controllers 00:10:33.001 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:33.001 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:33.001 Namespace ID: 1 size: 0GB 00:10:33.001 Initialization complete. 00:10:33.001 INFO: using host memory buffer for IO 00:10:33.001 Hello world! 
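The spdk_nvme_perf, reconnect, arbitration and hello_world runs above all address the same vfio-user endpoint through one transport-ID string. A condensed sketch of that invocation pattern follows; the binary path, traddr and subnqn are copied from the log, while the flag glosses are assumptions to double-check against spdk_nvme_perf --help for this build.

# Sketch: perf run against the vfio-user controller exercised above (values from the log)
PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
# -r: transport ID (vfio-user socket directory + subsystem NQN)
# -s: DPDK memory size in MB, -q: queue depth, -o: I/O size in bytes
# -w: workload (read/write/randrw), -t: run time in seconds, -c: core mask
# -g is passed exactly as the test script passes it; verify its meaning on this build
"$PERF" -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
        -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2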
00:10:33.001 [2024-07-15 23:13:48.236387] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:33.001 23:13:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:33.258 EAL: No free 2048 kB hugepages reported on node 1 00:10:33.258 [2024-07-15 23:13:48.522676] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:34.628 Initializing NVMe Controllers 00:10:34.628 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:34.628 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:34.628 Initialization complete. Launching workers. 00:10:34.628 submit (in ns) avg, min, max = 8085.8, 3548.9, 4015481.1 00:10:34.628 complete (in ns) avg, min, max = 27596.3, 2071.1, 4016636.7 00:10:34.628 00:10:34.628 Submit histogram 00:10:34.628 ================ 00:10:34.628 Range in us Cumulative Count 00:10:34.628 3.532 - 3.556: 0.0633% ( 8) 00:10:34.628 3.556 - 3.579: 1.1470% ( 137) 00:10:34.628 3.579 - 3.603: 2.7290% ( 200) 00:10:34.628 3.603 - 3.627: 6.9767% ( 537) 00:10:34.628 3.627 - 3.650: 13.2574% ( 794) 00:10:34.628 3.650 - 3.674: 22.2275% ( 1134) 00:10:34.628 3.674 - 3.698: 30.6755% ( 1068) 00:10:34.628 3.698 - 3.721: 40.1598% ( 1199) 00:10:34.628 3.721 - 3.745: 47.2394% ( 895) 00:10:34.628 3.745 - 3.769: 53.1720% ( 750) 00:10:34.628 3.769 - 3.793: 57.7440% ( 578) 00:10:34.628 3.793 - 3.816: 61.4302% ( 466) 00:10:34.628 3.816 - 3.840: 64.8711% ( 435) 00:10:34.628 3.840 - 3.864: 68.2329% ( 425) 00:10:34.628 3.864 - 3.887: 71.7924% ( 450) 00:10:34.628 3.887 - 3.911: 75.7475% ( 500) 00:10:34.628 3.911 - 3.935: 79.7500% ( 506) 00:10:34.628 3.935 - 3.959: 83.3175% ( 451) 00:10:34.628 3.959 - 3.982: 86.1652% ( 360) 00:10:34.628 3.982 - 4.006: 88.0952% ( 244) 00:10:34.628 4.006 - 4.030: 89.7880% ( 214) 00:10:34.628 4.030 - 4.053: 91.0774% ( 163) 00:10:34.628 4.053 - 4.077: 92.2243% ( 145) 00:10:34.628 4.077 - 4.101: 93.2447% ( 129) 00:10:34.628 4.101 - 4.124: 94.1465% ( 114) 00:10:34.628 4.124 - 4.148: 94.9771% ( 105) 00:10:34.628 4.148 - 4.172: 95.6178% ( 81) 00:10:34.628 4.172 - 4.196: 96.0291% ( 52) 00:10:34.628 4.196 - 4.219: 96.3692% ( 43) 00:10:34.628 4.219 - 4.243: 96.5986% ( 29) 00:10:34.628 4.243 - 4.267: 96.7568% ( 20) 00:10:34.628 4.267 - 4.290: 96.9150% ( 20) 00:10:34.628 4.290 - 4.314: 96.9783% ( 8) 00:10:34.628 4.314 - 4.338: 97.0732% ( 12) 00:10:34.628 4.338 - 4.361: 97.1286% ( 7) 00:10:34.628 4.361 - 4.385: 97.2156% ( 11) 00:10:34.628 4.385 - 4.409: 97.2947% ( 10) 00:10:34.628 4.409 - 4.433: 97.3264% ( 4) 00:10:34.628 4.433 - 4.456: 97.3738% ( 6) 00:10:34.628 4.456 - 4.480: 97.4134% ( 5) 00:10:34.628 4.480 - 4.504: 97.4450% ( 4) 00:10:34.628 4.504 - 4.527: 97.4767% ( 4) 00:10:34.628 4.527 - 4.551: 97.4925% ( 2) 00:10:34.628 4.551 - 4.575: 97.5083% ( 2) 00:10:34.628 4.575 - 4.599: 97.5162% ( 1) 00:10:34.628 4.670 - 4.693: 97.5320% ( 2) 00:10:34.628 4.693 - 4.717: 97.5637% ( 4) 00:10:34.628 4.717 - 4.741: 97.5716% ( 1) 00:10:34.628 4.741 - 4.764: 97.5953% ( 3) 00:10:34.628 4.764 - 4.788: 97.6270% ( 4) 00:10:34.628 4.788 - 4.812: 97.6665% ( 5) 00:10:34.628 4.812 - 4.836: 97.7061% ( 5) 00:10:34.628 4.836 - 4.859: 97.7456% ( 5) 00:10:34.628 4.859 - 4.883: 97.7852% ( 5) 00:10:34.628 4.883 - 4.907: 97.8247% ( 5) 00:10:34.628 4.907 - 4.930: 97.9038% ( 10) 00:10:34.628 4.930 
- 4.954: 97.9434% ( 5) 00:10:34.628 4.954 - 4.978: 98.0066% ( 8) 00:10:34.628 4.978 - 5.001: 98.0383% ( 4) 00:10:34.628 5.001 - 5.025: 98.0620% ( 3) 00:10:34.628 5.025 - 5.049: 98.1253% ( 8) 00:10:34.628 5.049 - 5.073: 98.1886% ( 8) 00:10:34.629 5.073 - 5.096: 98.2202% ( 4) 00:10:34.629 5.096 - 5.120: 98.2360% ( 2) 00:10:34.629 5.120 - 5.144: 98.2439% ( 1) 00:10:34.629 5.144 - 5.167: 98.2756% ( 4) 00:10:34.629 5.167 - 5.191: 98.2993% ( 3) 00:10:34.629 5.191 - 5.215: 98.3072% ( 1) 00:10:34.629 5.215 - 5.239: 98.3310% ( 3) 00:10:34.629 5.239 - 5.262: 98.3468% ( 2) 00:10:34.629 5.286 - 5.310: 98.3626% ( 2) 00:10:34.629 5.333 - 5.357: 98.3705% ( 1) 00:10:34.629 5.357 - 5.381: 98.3784% ( 1) 00:10:34.629 5.381 - 5.404: 98.3863% ( 1) 00:10:34.629 5.499 - 5.523: 98.3942% ( 1) 00:10:34.629 5.831 - 5.855: 98.4022% ( 1) 00:10:34.629 5.879 - 5.902: 98.4101% ( 1) 00:10:34.629 6.021 - 6.044: 98.4180% ( 1) 00:10:34.629 6.044 - 6.068: 98.4259% ( 1) 00:10:34.629 6.068 - 6.116: 98.4417% ( 2) 00:10:34.629 6.353 - 6.400: 98.4496% ( 1) 00:10:34.629 6.447 - 6.495: 98.4575% ( 1) 00:10:34.629 6.827 - 6.874: 98.4654% ( 1) 00:10:34.629 6.874 - 6.921: 98.4813% ( 2) 00:10:34.629 6.969 - 7.016: 98.4892% ( 1) 00:10:34.629 7.111 - 7.159: 98.4971% ( 1) 00:10:34.629 7.301 - 7.348: 98.5050% ( 1) 00:10:34.629 7.443 - 7.490: 98.5129% ( 1) 00:10:34.629 7.490 - 7.538: 98.5208% ( 1) 00:10:34.629 7.538 - 7.585: 98.5287% ( 1) 00:10:34.629 7.585 - 7.633: 98.5366% ( 1) 00:10:34.629 7.633 - 7.680: 98.5445% ( 1) 00:10:34.629 7.680 - 7.727: 98.5604% ( 2) 00:10:34.629 7.727 - 7.775: 98.5683% ( 1) 00:10:34.629 7.775 - 7.822: 98.5762% ( 1) 00:10:34.629 7.870 - 7.917: 98.5920% ( 2) 00:10:34.629 7.917 - 7.964: 98.6157% ( 3) 00:10:34.629 8.012 - 8.059: 98.6236% ( 1) 00:10:34.629 8.059 - 8.107: 98.6315% ( 1) 00:10:34.629 8.296 - 8.344: 98.6632% ( 4) 00:10:34.629 8.391 - 8.439: 98.6790% ( 2) 00:10:34.629 8.628 - 8.676: 98.6869% ( 1) 00:10:34.629 8.676 - 8.723: 98.6948% ( 1) 00:10:34.629 8.723 - 8.770: 98.7027% ( 1) 00:10:34.629 8.818 - 8.865: 98.7106% ( 1) 00:10:34.629 8.913 - 8.960: 98.7186% ( 1) 00:10:34.629 9.007 - 9.055: 98.7423% ( 3) 00:10:34.629 9.150 - 9.197: 98.7502% ( 1) 00:10:34.629 9.481 - 9.529: 98.7581% ( 1) 00:10:34.629 9.624 - 9.671: 98.7660% ( 1) 00:10:34.629 9.671 - 9.719: 98.7739% ( 1) 00:10:34.629 9.719 - 9.766: 98.7818% ( 1) 00:10:34.629 9.813 - 9.861: 98.7897% ( 1) 00:10:34.629 9.908 - 9.956: 98.8135% ( 3) 00:10:34.629 10.098 - 10.145: 98.8214% ( 1) 00:10:34.629 10.240 - 10.287: 98.8293% ( 1) 00:10:34.629 10.430 - 10.477: 98.8372% ( 1) 00:10:34.629 10.714 - 10.761: 98.8451% ( 1) 00:10:34.629 10.856 - 10.904: 98.8530% ( 1) 00:10:34.629 11.046 - 11.093: 98.8609% ( 1) 00:10:34.629 11.236 - 11.283: 98.8688% ( 1) 00:10:34.629 11.473 - 11.520: 98.8768% ( 1) 00:10:34.629 11.710 - 11.757: 98.8847% ( 1) 00:10:34.629 11.899 - 11.947: 98.8926% ( 1) 00:10:34.629 11.947 - 11.994: 98.9005% ( 1) 00:10:34.629 11.994 - 12.041: 98.9084% ( 1) 00:10:34.629 12.041 - 12.089: 98.9163% ( 1) 00:10:34.629 12.895 - 12.990: 98.9242% ( 1) 00:10:34.629 12.990 - 13.084: 98.9321% ( 1) 00:10:34.629 13.084 - 13.179: 98.9400% ( 1) 00:10:34.629 13.464 - 13.559: 98.9480% ( 1) 00:10:34.629 13.843 - 13.938: 98.9638% ( 2) 00:10:34.629 14.886 - 14.981: 98.9717% ( 1) 00:10:34.629 17.351 - 17.446: 99.0033% ( 4) 00:10:34.629 17.446 - 17.541: 99.0271% ( 3) 00:10:34.629 17.541 - 17.636: 99.0429% ( 2) 00:10:34.629 17.636 - 17.730: 99.0824% ( 5) 00:10:34.629 17.730 - 17.825: 99.1220% ( 5) 00:10:34.629 17.825 - 17.920: 99.1615% ( 5) 00:10:34.629 17.920 - 18.015: 
99.2169% ( 7) 00:10:34.629 18.015 - 18.110: 99.2485% ( 4) 00:10:34.629 18.110 - 18.204: 99.3118% ( 8) 00:10:34.629 18.204 - 18.299: 99.3830% ( 9) 00:10:34.629 18.299 - 18.394: 99.4938% ( 14) 00:10:34.629 18.394 - 18.489: 99.5491% ( 7) 00:10:34.629 18.489 - 18.584: 99.6045% ( 7) 00:10:34.629 18.584 - 18.679: 99.6678% ( 8) 00:10:34.629 18.679 - 18.773: 99.7231% ( 7) 00:10:34.629 18.773 - 18.868: 99.7548% ( 4) 00:10:34.629 18.868 - 18.963: 99.7785% ( 3) 00:10:34.629 18.963 - 19.058: 99.7943% ( 2) 00:10:34.629 19.058 - 19.153: 99.8102% ( 2) 00:10:34.629 19.153 - 19.247: 99.8181% ( 1) 00:10:34.629 19.247 - 19.342: 99.8260% ( 1) 00:10:34.629 19.437 - 19.532: 99.8339% ( 1) 00:10:34.629 19.532 - 19.627: 99.8497% ( 2) 00:10:34.629 19.721 - 19.816: 99.8576% ( 1) 00:10:34.629 21.428 - 21.523: 99.8655% ( 1) 00:10:34.629 22.566 - 22.661: 99.8734% ( 1) 00:10:34.629 23.230 - 23.324: 99.8813% ( 1) 00:10:34.629 24.652 - 24.841: 99.8893% ( 1) 00:10:34.629 25.031 - 25.221: 99.8972% ( 1) 00:10:34.629 3980.705 - 4004.978: 99.9842% ( 11) 00:10:34.629 4004.978 - 4029.250: 100.0000% ( 2) 00:10:34.629 00:10:34.629 Complete histogram 00:10:34.629 ================== 00:10:34.629 Range in us Cumulative Count 00:10:34.629 2.062 - 2.074: 0.0791% ( 10) 00:10:34.629 2.074 - 2.086: 7.1429% ( 893) 00:10:34.629 2.086 - 2.098: 16.9673% ( 1242) 00:10:34.629 2.098 - 2.110: 22.6546% ( 719) 00:10:34.629 2.110 - 2.121: 49.6124% ( 3408) 00:10:34.629 2.121 - 2.133: 58.6695% ( 1145) 00:10:34.629 2.133 - 2.145: 61.9285% ( 412) 00:10:34.629 2.145 - 2.157: 66.8328% ( 620) 00:10:34.629 2.157 - 2.169: 69.0318% ( 278) 00:10:34.629 2.169 - 2.181: 71.9506% ( 369) 00:10:34.629 2.181 - 2.193: 78.5319% ( 832) 00:10:34.629 2.193 - 2.204: 81.3795% ( 360) 00:10:34.629 2.204 - 2.216: 82.4237% ( 132) 00:10:34.629 2.216 - 2.228: 84.2667% ( 233) 00:10:34.629 2.228 - 2.240: 85.3583% ( 138) 00:10:34.629 2.240 - 2.252: 87.4703% ( 267) 00:10:34.629 2.252 - 2.264: 90.9666% ( 442) 00:10:34.629 2.264 - 2.276: 92.4379% ( 186) 00:10:34.629 2.276 - 2.287: 93.2685% ( 105) 00:10:34.629 2.287 - 2.299: 93.7826% ( 65) 00:10:34.629 2.299 - 2.311: 94.0358% ( 32) 00:10:34.629 2.311 - 2.323: 94.5341% ( 63) 00:10:34.629 2.323 - 2.335: 94.8030% ( 34) 00:10:34.629 2.335 - 2.347: 95.0087% ( 26) 00:10:34.629 2.347 - 2.359: 95.2856% ( 35) 00:10:34.629 2.359 - 2.370: 95.4121% ( 16) 00:10:34.629 2.370 - 2.382: 95.5308% ( 15) 00:10:34.629 2.382 - 2.394: 95.6969% ( 21) 00:10:34.630 2.394 - 2.406: 96.0607% ( 46) 00:10:34.630 2.406 - 2.418: 96.3692% ( 39) 00:10:34.630 2.418 - 2.430: 96.7173% ( 44) 00:10:34.630 2.430 - 2.441: 97.0574% ( 43) 00:10:34.630 2.441 - 2.453: 97.2947% ( 30) 00:10:34.630 2.453 - 2.465: 97.5083% ( 27) 00:10:34.630 2.465 - 2.477: 97.6349% ( 16) 00:10:34.630 2.477 - 2.489: 97.7852% ( 19) 00:10:34.630 2.489 - 2.501: 97.9355% ( 19) 00:10:34.630 2.501 - 2.513: 98.0462% ( 14) 00:10:34.630 2.513 - 2.524: 98.1253% ( 10) 00:10:34.630 2.524 - 2.536: 98.1728% ( 6) 00:10:34.630 2.536 - 2.548: 98.2202% ( 6) 00:10:34.630 2.548 - 2.560: 98.2598% ( 5) 00:10:34.630 2.560 - 2.572: 98.2756% ( 2) 00:10:34.630 2.572 - 2.584: 98.2993% ( 3) 00:10:34.630 2.631 - 2.643: 98.3072% ( 1) 00:10:34.630 2.655 - 2.667: 98.3151% ( 1) 00:10:34.630 2.667 - 2.679: 98.3310% ( 2) 00:10:34.630 2.726 - 2.738: 98.3389% ( 1) 00:10:34.630 2.975 - 2.987: 98.3468% ( 1) 00:10:34.630 3.437 - 3.461: 98.3547% ( 1) 00:10:34.630 3.484 - 3.508: 98.3626% ( 1) 00:10:34.630 3.508 - 3.532: 98.3705% ( 1) 00:10:34.630 3.556 - 3.579: 98.3863% ( 2) 00:10:34.630 3.579 - 3.603: 98.3942% ( 1) 00:10:34.630 3.627 - 3.650: 
98.4022% ( 1) 00:10:34.630 3.650 - 3.674: 98.4101% ( 1) 00:10:34.630 3.698 - 3.721: 98.4259% ( 2) 00:10:34.630 3.721 - 3.745: 98.4338% ( 1) 00:10:34.630 3.745 - 3.769: 98.4575% ( 3) 00:10:34.630 3.793 - 3.816: 98.4654% ( 1) 00:10:34.630 3.816 - 3.840: 98.4733% ( 1) 00:10:34.630 3.840 - 3.864: 98.4813% ( 1) 00:10:34.630 3.935 - 3.959: 98.4892% ( 1) 00:10:34.630 3.982 - 4.006: 98.5050% ( 2) 00:10:34.630 4.006 - 4.030: 98.5129% ( 1) 00:10:34.630 4.053 - 4.077: 98.5208% ( 1) 00:10:34.630 4.101 - 4.124: 98.5287% ( 1) 00:10:34.630 4.124 - 4.148: 98.5445% ( 2) 00:10:34.630 4.148 - 4.172: 98.5524% ( 1) 00:10:34.630 4.196 - 4.219: 98.5604% ( 1) 00:10:34.630 5.807 - 5.831: 98.5683% ( 1) 00:10:34.630 5.902 - 5.926: 98.5841% ( 2) 00:10:34.630 6.044 - 6.068: 98.5920% ( 1) 00:10:34.630 6.068 - 6.116: 98.5999% ( 1) 00:10:34.630 6.210 - 6.258: 98.6078% ( 1) 00:10:34.630 6.258 - 6.305: 98.6157% ( 1) 00:10:34.630 6.400 - 6.447: 98.6236% ( 1) 00:10:34.630 6.684 - 6.732: 98.6315% ( 1) 00:10:34.630 6.874 - 6.921: 98.6395% ( 1) 00:10:34.630 7.396 - 7.443: 98.6474% ( 1) 00:10:34.630 7.490 - 7.538: 98.6553% ( 1) 00:10:34.630 7.538 - 7.585: 98.6632% ( 1) 00:10:34.630 7.822 - 7.870: 98.6711% ( 1) 00:10:34.630 8.296 - 8.344: 98.6790% ( 1) 00:10:34.630 15.360 - 15.455: 98.6869% ( 1) 00:10:34.630 15.644 - 15.739: 98.6948% ( 1) 00:10:34.630 15.739 - 15.834: 98.7423% ( 6) 00:10:34.630 15.834 - 15.929: 9[2024-07-15 23:13:49.623530] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:34.630 8.7502% ( 1) 00:10:34.630 15.929 - 16.024: 98.7897% ( 5) 00:10:34.630 16.024 - 16.119: 98.8056% ( 2) 00:10:34.630 16.119 - 16.213: 98.8451% ( 5) 00:10:34.630 16.213 - 16.308: 98.8768% ( 4) 00:10:34.630 16.308 - 16.403: 98.9242% ( 6) 00:10:34.630 16.403 - 16.498: 98.9559% ( 4) 00:10:34.630 16.498 - 16.593: 99.0350% ( 10) 00:10:34.630 16.593 - 16.687: 99.1062% ( 9) 00:10:34.630 16.687 - 16.782: 99.1378% ( 4) 00:10:34.630 16.782 - 16.877: 99.1853% ( 6) 00:10:34.630 16.877 - 16.972: 99.2090% ( 3) 00:10:34.630 16.972 - 17.067: 99.2406% ( 4) 00:10:34.630 17.067 - 17.161: 99.2485% ( 1) 00:10:34.630 17.256 - 17.351: 99.2564% ( 1) 00:10:34.630 17.351 - 17.446: 99.2644% ( 1) 00:10:34.630 17.541 - 17.636: 99.2723% ( 1) 00:10:34.630 17.636 - 17.730: 99.3039% ( 4) 00:10:34.630 17.730 - 17.825: 99.3118% ( 1) 00:10:34.630 18.204 - 18.299: 99.3197% ( 1) 00:10:34.630 18.394 - 18.489: 99.3276% ( 1) 00:10:34.630 18.773 - 18.868: 99.3355% ( 1) 00:10:34.630 18.868 - 18.963: 99.3435% ( 1) 00:10:34.630 19.437 - 19.532: 99.3514% ( 1) 00:10:34.630 27.307 - 27.496: 99.3593% ( 1) 00:10:34.630 122.121 - 122.880: 99.3672% ( 1) 00:10:34.630 3980.705 - 4004.978: 99.8418% ( 60) 00:10:34.630 4004.978 - 4029.250: 100.0000% ( 20) 00:10:34.630 00:10:34.630 23:13:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:10:34.630 23:13:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:34.630 23:13:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:10:34.630 23:13:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:10:34.630 23:13:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:34.888 [ 00:10:34.888 { 00:10:34.888 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 
00:10:34.888 "subtype": "Discovery", 00:10:34.888 "listen_addresses": [], 00:10:34.888 "allow_any_host": true, 00:10:34.888 "hosts": [] 00:10:34.888 }, 00:10:34.888 { 00:10:34.888 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:34.888 "subtype": "NVMe", 00:10:34.888 "listen_addresses": [ 00:10:34.888 { 00:10:34.888 "trtype": "VFIOUSER", 00:10:34.888 "adrfam": "IPv4", 00:10:34.888 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:34.888 "trsvcid": "0" 00:10:34.888 } 00:10:34.888 ], 00:10:34.888 "allow_any_host": true, 00:10:34.888 "hosts": [], 00:10:34.888 "serial_number": "SPDK1", 00:10:34.888 "model_number": "SPDK bdev Controller", 00:10:34.888 "max_namespaces": 32, 00:10:34.888 "min_cntlid": 1, 00:10:34.888 "max_cntlid": 65519, 00:10:34.888 "namespaces": [ 00:10:34.888 { 00:10:34.888 "nsid": 1, 00:10:34.888 "bdev_name": "Malloc1", 00:10:34.888 "name": "Malloc1", 00:10:34.888 "nguid": "B63DC92A4A5041DDA86BB0570736C214", 00:10:34.888 "uuid": "b63dc92a-4a50-41dd-a86b-b0570736c214" 00:10:34.888 }, 00:10:34.888 { 00:10:34.888 "nsid": 2, 00:10:34.888 "bdev_name": "Malloc3", 00:10:34.888 "name": "Malloc3", 00:10:34.888 "nguid": "1C936976F15F4C1C8BCE6D91232FABF2", 00:10:34.888 "uuid": "1c936976-f15f-4c1c-8bce-6d91232fabf2" 00:10:34.888 } 00:10:34.888 ] 00:10:34.888 }, 00:10:34.888 { 00:10:34.888 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:34.888 "subtype": "NVMe", 00:10:34.888 "listen_addresses": [ 00:10:34.888 { 00:10:34.888 "trtype": "VFIOUSER", 00:10:34.888 "adrfam": "IPv4", 00:10:34.888 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:34.888 "trsvcid": "0" 00:10:34.888 } 00:10:34.888 ], 00:10:34.888 "allow_any_host": true, 00:10:34.888 "hosts": [], 00:10:34.888 "serial_number": "SPDK2", 00:10:34.888 "model_number": "SPDK bdev Controller", 00:10:34.888 "max_namespaces": 32, 00:10:34.888 "min_cntlid": 1, 00:10:34.888 "max_cntlid": 65519, 00:10:34.888 "namespaces": [ 00:10:34.888 { 00:10:34.888 "nsid": 1, 00:10:34.888 "bdev_name": "Malloc2", 00:10:34.888 "name": "Malloc2", 00:10:34.888 "nguid": "EFDBFD6DF7624F7889250248601FF806", 00:10:34.888 "uuid": "efdbfd6d-f762-4f78-8925-0248601ff806" 00:10:34.888 } 00:10:34.888 ] 00:10:34.888 } 00:10:34.888 ] 00:10:34.888 23:13:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:34.888 23:13:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2286459 00:10:34.888 23:13:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:10:34.888 23:13:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:34.888 23:13:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:10:34.888 23:13:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:34.888 23:13:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:10:34.889 23:13:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:10:34.889 23:13:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:34.889 23:13:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:10:34.889 EAL: No free 2048 kB hugepages reported on node 1 00:10:34.889 [2024-07-15 23:13:50.129295] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:35.147 Malloc4 00:10:35.147 23:13:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:10:35.404 [2024-07-15 23:13:50.490139] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:35.404 23:13:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:35.404 Asynchronous Event Request test 00:10:35.404 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:35.404 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:35.404 Registering asynchronous event callbacks... 00:10:35.404 Starting namespace attribute notice tests for all controllers... 00:10:35.404 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:35.404 aer_cb - Changed Namespace 00:10:35.404 Cleaning up... 00:10:35.662 [ 00:10:35.662 { 00:10:35.662 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:35.662 "subtype": "Discovery", 00:10:35.662 "listen_addresses": [], 00:10:35.662 "allow_any_host": true, 00:10:35.662 "hosts": [] 00:10:35.662 }, 00:10:35.662 { 00:10:35.662 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:35.662 "subtype": "NVMe", 00:10:35.662 "listen_addresses": [ 00:10:35.662 { 00:10:35.662 "trtype": "VFIOUSER", 00:10:35.662 "adrfam": "IPv4", 00:10:35.662 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:35.662 "trsvcid": "0" 00:10:35.662 } 00:10:35.662 ], 00:10:35.662 "allow_any_host": true, 00:10:35.662 "hosts": [], 00:10:35.662 "serial_number": "SPDK1", 00:10:35.662 "model_number": "SPDK bdev Controller", 00:10:35.662 "max_namespaces": 32, 00:10:35.662 "min_cntlid": 1, 00:10:35.662 "max_cntlid": 65519, 00:10:35.662 "namespaces": [ 00:10:35.662 { 00:10:35.662 "nsid": 1, 00:10:35.662 "bdev_name": "Malloc1", 00:10:35.662 "name": "Malloc1", 00:10:35.662 "nguid": "B63DC92A4A5041DDA86BB0570736C214", 00:10:35.662 "uuid": "b63dc92a-4a50-41dd-a86b-b0570736c214" 00:10:35.662 }, 00:10:35.662 { 00:10:35.662 "nsid": 2, 00:10:35.662 "bdev_name": "Malloc3", 00:10:35.662 "name": "Malloc3", 00:10:35.662 "nguid": "1C936976F15F4C1C8BCE6D91232FABF2", 00:10:35.662 "uuid": "1c936976-f15f-4c1c-8bce-6d91232fabf2" 00:10:35.662 } 00:10:35.662 ] 00:10:35.662 }, 00:10:35.662 { 00:10:35.662 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:35.662 "subtype": "NVMe", 00:10:35.662 "listen_addresses": [ 00:10:35.662 { 00:10:35.662 "trtype": "VFIOUSER", 00:10:35.662 "adrfam": "IPv4", 00:10:35.662 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:35.662 "trsvcid": "0" 00:10:35.662 } 00:10:35.662 ], 00:10:35.662 "allow_any_host": true, 00:10:35.662 "hosts": [], 00:10:35.662 "serial_number": "SPDK2", 00:10:35.662 "model_number": "SPDK bdev Controller", 00:10:35.662 
"max_namespaces": 32, 00:10:35.662 "min_cntlid": 1, 00:10:35.662 "max_cntlid": 65519, 00:10:35.662 "namespaces": [ 00:10:35.662 { 00:10:35.662 "nsid": 1, 00:10:35.662 "bdev_name": "Malloc2", 00:10:35.662 "name": "Malloc2", 00:10:35.662 "nguid": "EFDBFD6DF7624F7889250248601FF806", 00:10:35.663 "uuid": "efdbfd6d-f762-4f78-8925-0248601ff806" 00:10:35.663 }, 00:10:35.663 { 00:10:35.663 "nsid": 2, 00:10:35.663 "bdev_name": "Malloc4", 00:10:35.663 "name": "Malloc4", 00:10:35.663 "nguid": "4D1A25595C424AECB2262E3C9E9E21A9", 00:10:35.663 "uuid": "4d1a2559-5c42-4aec-b226-2e3c9e9e21a9" 00:10:35.663 } 00:10:35.663 ] 00:10:35.663 } 00:10:35.663 ] 00:10:35.663 23:13:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2286459 00:10:35.663 23:13:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:10:35.663 23:13:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2280844 00:10:35.663 23:13:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 2280844 ']' 00:10:35.663 23:13:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 2280844 00:10:35.663 23:13:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:10:35.663 23:13:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:35.663 23:13:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2280844 00:10:35.663 23:13:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:35.663 23:13:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:35.663 23:13:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2280844' 00:10:35.663 killing process with pid 2280844 00:10:35.663 23:13:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 2280844 00:10:35.663 23:13:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 2280844 00:10:35.919 23:13:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:35.919 23:13:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:35.919 23:13:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:10:35.919 23:13:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:10:35.919 23:13:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:10:35.919 23:13:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2286622 00:10:35.919 23:13:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:10:35.920 23:13:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2286622' 00:10:35.920 Process pid: 2286622 00:10:35.920 23:13:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:35.920 23:13:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2286622 00:10:35.920 23:13:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 2286622 ']' 00:10:35.920 23:13:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.920 23:13:51 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:35.920 23:13:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:35.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:35.920 23:13:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:35.920 23:13:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:35.920 [2024-07-15 23:13:51.221835] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:10:35.920 [2024-07-15 23:13:51.222866] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:10:35.920 [2024-07-15 23:13:51.222940] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:36.177 EAL: No free 2048 kB hugepages reported on node 1 00:10:36.177 [2024-07-15 23:13:51.287189] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:36.177 [2024-07-15 23:13:51.403918] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:36.177 [2024-07-15 23:13:51.403978] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:36.177 [2024-07-15 23:13:51.404005] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:36.177 [2024-07-15 23:13:51.404019] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:36.177 [2024-07-15 23:13:51.404030] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:36.177 [2024-07-15 23:13:51.404132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:36.177 [2024-07-15 23:13:51.404185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:36.177 [2024-07-15 23:13:51.404301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:36.177 [2024-07-15 23:13:51.404303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.434 [2024-07-15 23:13:51.510528] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:10:36.434 [2024-07-15 23:13:51.510723] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:10:36.434 [2024-07-15 23:13:51.511012] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:10:36.434 [2024-07-15 23:13:51.511662] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:10:36.434 [2024-07-15 23:13:51.511929] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
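The subsystem inventories printed earlier in this run (cnode1/cnode2 with their Malloc namespaces) come straight from the nvmf_get_subsystems RPC. A small sketch of inspecting that output by hand is below; the rpc.py path matches the log, while the jq filter is purely illustrative and assumes jq is available on the build host.

# Sketch: summarize each subsystem's listeners and namespaces (illustrative only)
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$RPC" nvmf_get_subsystems | jq -r \
  '.[] | "\(.nqn)  listeners=\(.listen_addresses | length)  namespaces=\(.namespaces // [] | length)"'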
00:10:36.999 23:13:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:36.999 23:13:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:10:36.999 23:13:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:10:37.930 23:13:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:10:38.186 23:13:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:10:38.186 23:13:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:10:38.186 23:13:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:38.186 23:13:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:10:38.186 23:13:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:38.444 Malloc1 00:10:38.444 23:13:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:10:38.702 23:13:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:10:39.266 23:13:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:10:39.266 23:13:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:39.266 23:13:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:10:39.523 23:13:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:39.780 Malloc2 00:10:39.780 23:13:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:10:40.037 23:13:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:10:40.294 23:13:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:10:40.552 23:13:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:10:40.552 23:13:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2286622 00:10:40.552 23:13:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 2286622 ']' 00:10:40.552 23:13:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 2286622 00:10:40.552 23:13:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:10:40.552 23:13:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:40.552 23:13:55 
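For reference, the per-device provisioning the script just replayed for vfio-user1 and vfio-user2 reduces to the RPC sequence sketched below. Every command, path and NQN is copied from the log above; only the grouping into a single snippet and the comments are editorial.

# Sketch: one vfio-user device as provisioned in this interrupt-mode pass
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$RPC" nvmf_create_transport -t VFIOUSER -M -I           # transport args as passed by the script
mkdir -p /var/run/vfio-user/domain/vfio-user2/2           # socket directory used as traddr
"$RPC" bdev_malloc_create 64 512 -b Malloc2               # 64 MB malloc bdev, 512 B blocks
"$RPC" nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2
"$RPC" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2
"$RPC" nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 \
       -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0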
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2286622 00:10:40.552 23:13:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:40.552 23:13:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:40.552 23:13:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2286622' 00:10:40.552 killing process with pid 2286622 00:10:40.552 23:13:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 2286622 00:10:40.552 23:13:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 2286622 00:10:40.810 23:13:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:40.810 23:13:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:40.810 00:10:40.810 real 0m54.141s 00:10:40.810 user 3m33.408s 00:10:40.810 sys 0m4.733s 00:10:40.810 23:13:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:40.810 23:13:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:40.810 ************************************ 00:10:40.810 END TEST nvmf_vfio_user 00:10:40.810 ************************************ 00:10:40.810 23:13:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:40.810 23:13:56 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:40.810 23:13:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:40.810 23:13:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:40.810 23:13:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:40.810 ************************************ 00:10:40.810 START TEST nvmf_vfio_user_nvme_compliance 00:10:40.810 ************************************ 00:10:40.811 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:41.069 * Looking for test storage... 
00:10:41.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=2287333 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2287333' 00:10:41.069 Process pid: 2287333 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2287333 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 2287333 ']' 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:41.069 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:41.069 [2024-07-15 23:13:56.197779] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:10:41.070 [2024-07-15 23:13:56.197856] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:41.070 EAL: No free 2048 kB hugepages reported on node 1 00:10:41.070 [2024-07-15 23:13:56.255093] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:41.070 [2024-07-15 23:13:56.362350] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:41.070 [2024-07-15 23:13:56.362405] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:41.070 [2024-07-15 23:13:56.362418] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:41.070 [2024-07-15 23:13:56.362429] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:41.070 [2024-07-15 23:13:56.362439] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:41.070 [2024-07-15 23:13:56.362500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:41.070 [2024-07-15 23:13:56.362557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:41.070 [2024-07-15 23:13:56.362560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.327 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:41.327 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:10:41.327 23:13:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:10:42.259 23:13:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:42.259 23:13:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:10:42.259 23:13:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:42.259 23:13:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.259 23:13:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:42.259 23:13:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.259 23:13:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:10:42.259 23:13:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:42.259 23:13:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.259 23:13:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:42.259 malloc0 00:10:42.259 23:13:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.259 23:13:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:10:42.259 23:13:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.259 23:13:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:42.259 23:13:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.259 23:13:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:42.259 23:13:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.259 23:13:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:42.259 23:13:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.259 23:13:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:42.259 23:13:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.259 23:13:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:42.259 23:13:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.259 
23:13:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:10:42.516 EAL: No free 2048 kB hugepages reported on node 1 00:10:42.516 00:10:42.516 00:10:42.516 CUnit - A unit testing framework for C - Version 2.1-3 00:10:42.516 http://cunit.sourceforge.net/ 00:10:42.516 00:10:42.516 00:10:42.516 Suite: nvme_compliance 00:10:42.516 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 23:13:57.714319] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:42.516 [2024-07-15 23:13:57.715824] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:10:42.516 [2024-07-15 23:13:57.715850] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:10:42.516 [2024-07-15 23:13:57.715862] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:10:42.516 [2024-07-15 23:13:57.717341] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:42.516 passed 00:10:42.516 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 23:13:57.802949] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:42.516 [2024-07-15 23:13:57.805970] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:42.773 passed 00:10:42.773 Test: admin_identify_ns ...[2024-07-15 23:13:57.893202] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:42.773 [2024-07-15 23:13:57.952757] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:10:42.773 [2024-07-15 23:13:57.960771] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:10:42.773 [2024-07-15 23:13:57.981877] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:42.773 passed 00:10:42.773 Test: admin_get_features_mandatory_features ...[2024-07-15 23:13:58.065579] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:42.773 [2024-07-15 23:13:58.068596] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:43.030 passed 00:10:43.030 Test: admin_get_features_optional_features ...[2024-07-15 23:13:58.154199] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:43.030 [2024-07-15 23:13:58.157217] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:43.030 passed 00:10:43.030 Test: admin_set_features_number_of_queues ...[2024-07-15 23:13:58.239358] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:43.030 [2024-07-15 23:13:58.342872] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:43.288 passed 00:10:43.288 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 23:13:58.428085] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:43.288 [2024-07-15 23:13:58.431107] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:43.288 passed 00:10:43.288 Test: admin_get_log_page_with_lpo ...[2024-07-15 23:13:58.513265] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:43.288 [2024-07-15 23:13:58.581772] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:10:43.288 [2024-07-15 23:13:58.594832] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:43.570 passed 00:10:43.570 Test: fabric_property_get ...[2024-07-15 23:13:58.676875] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:43.570 [2024-07-15 23:13:58.678173] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:10:43.570 [2024-07-15 23:13:58.679898] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:43.570 passed 00:10:43.570 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 23:13:58.765440] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:43.570 [2024-07-15 23:13:58.766747] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:10:43.570 [2024-07-15 23:13:58.768459] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:43.570 passed 00:10:43.570 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 23:13:58.850556] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:43.827 [2024-07-15 23:13:58.934762] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:43.827 [2024-07-15 23:13:58.953766] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:43.827 [2024-07-15 23:13:58.958853] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:43.827 passed 00:10:43.827 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 23:13:59.041006] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:43.827 [2024-07-15 23:13:59.042339] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:10:43.827 [2024-07-15 23:13:59.044041] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:43.827 passed 00:10:43.827 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 23:13:59.128675] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:44.084 [2024-07-15 23:13:59.205749] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:44.084 [2024-07-15 23:13:59.229751] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:44.084 [2024-07-15 23:13:59.234843] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:44.084 passed 00:10:44.084 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 23:13:59.319616] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:44.084 [2024-07-15 23:13:59.320959] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:10:44.084 [2024-07-15 23:13:59.321000] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:10:44.084 [2024-07-15 23:13:59.322637] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:44.084 passed 00:10:44.341 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 23:13:59.402773] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:44.341 [2024-07-15 23:13:59.496775] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:10:44.341 [2024-07-15 23:13:59.504758] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:10:44.341 [2024-07-15 23:13:59.512753] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:10:44.341 [2024-07-15 23:13:59.520746] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:10:44.341 [2024-07-15 23:13:59.549866] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:44.341 passed 00:10:44.341 Test: admin_create_io_sq_verify_pc ...[2024-07-15 23:13:59.633441] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:44.341 [2024-07-15 23:13:59.649761] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:10:44.630 [2024-07-15 23:13:59.667773] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:44.630 passed 00:10:44.631 Test: admin_create_io_qp_max_qps ...[2024-07-15 23:13:59.753352] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:45.579 [2024-07-15 23:14:00.849757] nvme_ctrlr.c:5475:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:10:46.143 [2024-07-15 23:14:01.242101] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:46.143 passed 00:10:46.143 Test: admin_create_io_sq_shared_cq ...[2024-07-15 23:14:01.323318] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:46.143 [2024-07-15 23:14:01.455747] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:46.400 [2024-07-15 23:14:01.492841] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:46.400 passed 00:10:46.400 00:10:46.400 Run Summary: Type Total Ran Passed Failed Inactive 00:10:46.400 suites 1 1 n/a 0 0 00:10:46.400 tests 18 18 18 0 0 00:10:46.400 asserts 360 360 360 0 n/a 00:10:46.400 00:10:46.400 Elapsed time = 1.567 seconds 00:10:46.400 23:14:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2287333 00:10:46.400 23:14:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 2287333 ']' 00:10:46.400 23:14:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 2287333 00:10:46.400 23:14:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:10:46.400 23:14:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:46.400 23:14:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2287333 00:10:46.400 23:14:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:46.400 23:14:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:46.400 23:14:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2287333' 00:10:46.400 killing process with pid 2287333 00:10:46.400 23:14:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 2287333 00:10:46.400 23:14:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 2287333 00:10:46.656 23:14:01 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:10:46.656 23:14:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:10:46.656 00:10:46.656 real 0m5.804s 00:10:46.656 user 0m16.274s 00:10:46.656 sys 0m0.558s 00:10:46.656 23:14:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:46.656 23:14:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:46.656 ************************************ 00:10:46.656 END TEST nvmf_vfio_user_nvme_compliance 00:10:46.656 ************************************ 00:10:46.656 23:14:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:46.656 23:14:01 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:10:46.656 23:14:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:46.656 23:14:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:46.656 23:14:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:46.656 ************************************ 00:10:46.656 START TEST nvmf_vfio_user_fuzz 00:10:46.656 ************************************ 00:10:46.656 23:14:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:10:46.914 * Looking for test storage... 00:10:46.914 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:46.914 23:14:01 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:46.914 23:14:01 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:10:46.914 23:14:01 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:46.914 23:14:01 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:46.914 23:14:01 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:46.914 23:14:01 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:46.914 23:14:01 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:46.914 23:14:01 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:46.914 23:14:01 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:46.914 23:14:01 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:46.914 23:14:01 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:46.914 23:14:01 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:46.914 23:14:01 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:46.914 23:14:01 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:46.914 23:14:01 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:46.914 23:14:01 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:46.914 23:14:01 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
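The compliance run above is torn down through the killprocess helper from common/autotest_common.sh. A minimal reconstruction of the shutdown path actually exercised in this trace (the sudo branch and the real helper's error handling are not shown in the log and may differ):

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                 # no pid supplied
    kill -0 "$pid" || return 0                # target process already gone
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 in this run
    fi
    if [ "$process_name" != sudo ]; then      # sudo-wrapper branch not exercised in this log
        echo "killing process with pid $pid"
        kill "$pid"
    fi
    wait "$pid"                               # reap the target so its sockets are released
}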
00:10:46.914 23:14:01 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:46.914 23:14:01 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:46.914 23:14:01 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:46.914 23:14:02 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:46.914 23:14:02 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:46.914 23:14:02 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.914 23:14:02 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.914 23:14:02 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.914 23:14:02 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:10:46.914 23:14:02 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.914 23:14:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:10:46.914 23:14:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:46.914 23:14:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:46.914 23:14:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:46.914 23:14:02 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:46.914 23:14:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:46.914 23:14:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:46.914 23:14:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:46.914 23:14:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:46.914 23:14:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:10:46.914 23:14:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:46.914 23:14:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:46.914 23:14:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:10:46.914 23:14:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:10:46.914 23:14:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:10:46.914 23:14:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:10:46.914 23:14:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2288055 00:10:46.914 23:14:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:46.914 23:14:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2288055' 00:10:46.914 Process pid: 2288055 00:10:46.914 23:14:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:46.914 23:14:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2288055 00:10:46.914 23:14:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 2288055 ']' 00:10:46.914 23:14:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.914 23:14:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:46.914 23:14:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
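Once waitforlisten returns, vfio_user_fuzz.sh builds the target it will fuzz with a short RPC sequence and then runs nvme_fuzz against the vfio-user endpoint, as the trace below shows. Condensed sketch of that sequence (rpc_cmd is the test suite's RPC wrapper; absolute workspace paths are shortened for readability):

nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &              # single-core target, pid kept in nvmfpid
waitforlisten "$nvmfpid"                      # block until /var/tmp/spdk.sock is up

rpc_cmd nvmf_create_transport -t VFIOUSER     # vfio-user transport instead of TCP
mkdir -p /var/run/vfio-user                   # directory backing the vfio-user endpoint
rpc_cmd bdev_malloc_create 64 512 -b malloc0  # 64 MiB ramdisk, 512-byte blocks
rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

# 30 s of fuzzing (-t 30) with a fixed seed (-S 123456) against the vfio-user controller
nvme_fuzz -m 0x2 -t 30 -S 123456 \
    -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a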
00:10:46.914 23:14:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:46.914 23:14:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:47.172 23:14:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:47.172 23:14:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:10:47.172 23:14:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:10:48.108 23:14:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:48.108 23:14:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.108 23:14:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:48.108 23:14:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.108 23:14:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:10:48.108 23:14:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:48.108 23:14:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.108 23:14:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:48.108 malloc0 00:10:48.108 23:14:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.108 23:14:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:10:48.108 23:14:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.108 23:14:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:48.108 23:14:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.108 23:14:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:48.108 23:14:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.108 23:14:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:48.108 23:14:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.108 23:14:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:48.108 23:14:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.108 23:14:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:48.108 23:14:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.108 23:14:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:10:48.108 23:14:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:11:20.174 Fuzzing completed. 
Shutting down the fuzz application 00:11:20.174 00:11:20.174 Dumping successful admin opcodes: 00:11:20.174 8, 9, 10, 24, 00:11:20.174 Dumping successful io opcodes: 00:11:20.174 0, 00:11:20.174 NS: 0x200003a1ef00 I/O qp, Total commands completed: 595414, total successful commands: 2301, random_seed: 2594413504 00:11:20.174 NS: 0x200003a1ef00 admin qp, Total commands completed: 95769, total successful commands: 778, random_seed: 3245708160 00:11:20.174 23:14:33 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:11:20.174 23:14:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.174 23:14:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:20.174 23:14:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.174 23:14:33 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2288055 00:11:20.174 23:14:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 2288055 ']' 00:11:20.174 23:14:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 2288055 00:11:20.174 23:14:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:11:20.174 23:14:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:20.174 23:14:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2288055 00:11:20.174 23:14:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:20.174 23:14:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:20.174 23:14:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2288055' 00:11:20.174 killing process with pid 2288055 00:11:20.174 23:14:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 2288055 00:11:20.174 23:14:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 2288055 00:11:20.174 23:14:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:11:20.174 23:14:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:11:20.174 00:11:20.174 real 0m32.344s 00:11:20.174 user 0m31.589s 00:11:20.174 sys 0m28.898s 00:11:20.174 23:14:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:20.174 23:14:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:20.174 ************************************ 00:11:20.174 END TEST nvmf_vfio_user_fuzz 00:11:20.174 ************************************ 00:11:20.174 23:14:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:20.174 23:14:34 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:20.174 23:14:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:20.174 23:14:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:20.174 23:14:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:20.174 ************************************ 00:11:20.174 
START TEST nvmf_host_management 00:11:20.174 ************************************ 00:11:20.174 23:14:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:20.174 * Looking for test storage... 00:11:20.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:20.174 23:14:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:20.174 23:14:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:11:20.174 23:14:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:20.174 23:14:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:20.174 23:14:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:20.174 23:14:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:20.174 23:14:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:20.174 23:14:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:20.174 23:14:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:20.174 23:14:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:20.174 23:14:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:20.174 23:14:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:20.174 23:14:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:20.174 23:14:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:20.174 23:14:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:20.174 23:14:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:20.174 23:14:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:20.174 23:14:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:20.174 23:14:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:20.174 23:14:34 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:20.174 23:14:34 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:20.174 23:14:34 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:20.174 23:14:34 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.174 23:14:34 
nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.174 23:14:34 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.174 23:14:34 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:11:20.174 23:14:34 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.174 23:14:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:11:20.174 23:14:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:20.174 23:14:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:20.174 23:14:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:20.174 23:14:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:20.174 23:14:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:20.174 23:14:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:20.174 23:14:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:20.174 23:14:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:20.175 23:14:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:20.175 23:14:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:20.175 23:14:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:11:20.175 23:14:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:20.175 23:14:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:20.175 23:14:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:20.175 23:14:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:20.175 23:14:34 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:20.175 23:14:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.175 23:14:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:20.175 23:14:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:20.175 23:14:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:20.175 23:14:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:20.175 23:14:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:11:20.175 23:14:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:21.111 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:21.111 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:11:21.111 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:21.111 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:21.111 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:21.111 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:21.111 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:21.111 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:11:21.111 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:21.111 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:21.112 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:21.112 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:21.112 Found net devices under 0000:84:00.0: cvl_0_0 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:21.112 Found net devices under 0000:84:00.1: cvl_0_1 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:21.112 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:21.371 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:21.371 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:21.371 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:21.371 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:21.371 23:14:36 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:21.371 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:21.371 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:21.371 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:21.371 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:11:21.371 00:11:21.371 --- 10.0.0.2 ping statistics --- 00:11:21.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.371 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:11:21.371 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:21.371 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:21.371 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:11:21.371 00:11:21.371 --- 10.0.0.1 ping statistics --- 00:11:21.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.371 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:11:21.371 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:21.371 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:11:21.371 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:21.371 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:21.371 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:21.371 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:21.371 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:21.371 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:21.371 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:21.371 23:14:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:11:21.371 23:14:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:11:21.371 23:14:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:11:21.371 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:21.371 23:14:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:21.371 23:14:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:21.371 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2293540 00:11:21.371 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:11:21.371 23:14:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2293540 00:11:21.371 23:14:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2293540 ']' 00:11:21.371 23:14:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.371 23:14:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:21.371 23:14:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:11:21.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.371 23:14:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:21.371 23:14:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:21.371 [2024-07-15 23:14:36.575059] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:11:21.371 [2024-07-15 23:14:36.575146] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.371 EAL: No free 2048 kB hugepages reported on node 1 00:11:21.371 [2024-07-15 23:14:36.646201] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:21.629 [2024-07-15 23:14:36.770056] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:21.629 [2024-07-15 23:14:36.770126] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:21.629 [2024-07-15 23:14:36.770143] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:21.629 [2024-07-15 23:14:36.770157] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:21.629 [2024-07-15 23:14:36.770169] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:21.629 [2024-07-15 23:14:36.770255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:21.629 [2024-07-15 23:14:36.770284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:21.629 [2024-07-15 23:14:36.770337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:21.629 [2024-07-15 23:14:36.770340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:22.560 23:14:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:22.560 23:14:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:11:22.560 23:14:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:22.560 23:14:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:22.560 23:14:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:22.560 23:14:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:22.560 23:14:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:22.560 23:14:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.560 23:14:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:22.560 [2024-07-15 23:14:37.535687] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:22.560 23:14:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.560 23:14:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:11:22.560 23:14:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:22.560 23:14:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:22.560 23:14:37 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:22.560 23:14:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:11:22.560 23:14:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:11:22.560 23:14:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.560 23:14:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:22.560 Malloc0 00:11:22.560 [2024-07-15 23:14:37.595916] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:22.560 23:14:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.560 23:14:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:11:22.560 23:14:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:22.560 23:14:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:22.560 23:14:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2293717 00:11:22.560 23:14:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2293717 /var/tmp/bdevperf.sock 00:11:22.560 23:14:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2293717 ']' 00:11:22.560 23:14:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:22.561 23:14:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:11:22.561 23:14:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:11:22.561 23:14:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:22.561 23:14:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:11:22.561 23:14:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:22.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
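The bdevperf instance above receives its bdev configuration as JSON on fd 63; the generated attach-controller entry is visible verbatim in the printf output below. An equivalent standalone launch using a plain file instead of the process-substitution fd might look as follows (the file name and the outer "subsystems"/"bdev" wrapper around the entry are assumptions; only the inner object appears in the trace):

cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [ {
    "subsystem": "bdev",
    "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false, "ddgst": false
      }
    } ]
  } ]
}
EOF
# 64 outstanding I/Os, 65536-byte requests, verify workload, 10 second run
bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme.json -q 64 -o 65536 -w verify -t 10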
00:11:22.561 23:14:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:11:22.561 23:14:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:22.561 23:14:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:22.561 23:14:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:22.561 23:14:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:22.561 { 00:11:22.561 "params": { 00:11:22.561 "name": "Nvme$subsystem", 00:11:22.561 "trtype": "$TEST_TRANSPORT", 00:11:22.561 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:22.561 "adrfam": "ipv4", 00:11:22.561 "trsvcid": "$NVMF_PORT", 00:11:22.561 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:22.561 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:22.561 "hdgst": ${hdgst:-false}, 00:11:22.561 "ddgst": ${ddgst:-false} 00:11:22.561 }, 00:11:22.561 "method": "bdev_nvme_attach_controller" 00:11:22.561 } 00:11:22.561 EOF 00:11:22.561 )") 00:11:22.561 23:14:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:11:22.561 23:14:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:11:22.561 23:14:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:11:22.561 23:14:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:22.561 "params": { 00:11:22.561 "name": "Nvme0", 00:11:22.561 "trtype": "tcp", 00:11:22.561 "traddr": "10.0.0.2", 00:11:22.561 "adrfam": "ipv4", 00:11:22.561 "trsvcid": "4420", 00:11:22.561 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:22.561 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:22.561 "hdgst": false, 00:11:22.561 "ddgst": false 00:11:22.561 }, 00:11:22.561 "method": "bdev_nvme_attach_controller" 00:11:22.561 }' 00:11:22.561 [2024-07-15 23:14:37.676409] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:11:22.561 [2024-07-15 23:14:37.676495] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2293717 ] 00:11:22.561 EAL: No free 2048 kB hugepages reported on node 1 00:11:22.561 [2024-07-15 23:14:37.738403] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.561 [2024-07-15 23:14:37.849580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.818 Running I/O for 10 seconds... 
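Progress is then checked with the waitforio helper from host_management.sh, which polls bdevperf's RPC socket until the Nvme0n1 bdev reports at least 100 completed reads, up to ten attempts 0.25 s apart; in the trace that follows the first poll sees 67 read ops and the second 451. A reconstruction of that loop from the xtrace:

waitforio() {
    local sock=$1 bdev=$2 ret=1 i
    [ -z "$sock" ] && return 1                 # need the bdevperf RPC socket
    [ -z "$bdev" ] && return 1                 # and a bdev name to watch
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" \
                        | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then  # enough traffic seen, target is serving I/O
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}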
00:11:22.818 23:14:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:22.818 23:14:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:11:22.818 23:14:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:11:22.818 23:14:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.818 23:14:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:22.818 23:14:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.818 23:14:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:22.818 23:14:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:11:22.818 23:14:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:11:22.818 23:14:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:11:22.818 23:14:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:11:22.818 23:14:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:11:22.818 23:14:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:11:22.818 23:14:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:22.818 23:14:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:22.818 23:14:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:22.818 23:14:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.818 23:14:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:23.076 23:14:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.076 23:14:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:11:23.076 23:14:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:11:23.076 23:14:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:11:23.334 23:14:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:11:23.334 23:14:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:23.334 23:14:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:23.334 23:14:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:23.334 23:14:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.334 23:14:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:23.335 23:14:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.335 23:14:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=451 00:11:23.335 23:14:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 451 -ge 100 ']' 00:11:23.335 23:14:38 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0 00:11:23.335 23:14:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:11:23.335 23:14:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:11:23.335 23:14:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:23.335 23:14:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.335 23:14:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:23.335 [2024-07-15 23:14:38.455164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.335 [2024-07-15 23:14:38.455220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.335 [2024-07-15 23:14:38.455249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:72192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.335 [2024-07-15 23:14:38.455265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.335 [2024-07-15 23:14:38.455285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.335 [2024-07-15 23:14:38.455300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.335 [2024-07-15 23:14:38.455316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.335 [2024-07-15 23:14:38.455330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.335 [2024-07-15 23:14:38.455349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.335 [2024-07-15 23:14:38.455363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.335 [2024-07-15 23:14:38.455387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.335 [2024-07-15 23:14:38.455402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.335 [2024-07-15 23:14:38.455418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.335 [2024-07-15 23:14:38.455432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.335 [2024-07-15 23:14:38.455447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.335 [2024-07-15 23:14:38.455461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.335 [2024-07-15 23:14:38.455477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:59 nsid:1 lba:73088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.335 [2024-07-15 23:14:38.455491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.335 [2024-07-15 23:14:38.455508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.335 [2024-07-15 23:14:38.455523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.335 [2024-07-15 23:14:38.455539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.335 [2024-07-15 23:14:38.455568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.335 [2024-07-15 23:14:38.455584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.335 [2024-07-15 23:14:38.455597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.335 [2024-07-15 23:14:38.455612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.335 [2024-07-15 23:14:38.455625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.335 [2024-07-15 23:14:38.455641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.335 [2024-07-15 23:14:38.455655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.335 [2024-07-15 23:14:38.455670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.335 [2024-07-15 23:14:38.455683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.335 [2024-07-15 23:14:38.455698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:65792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.335 [2024-07-15 23:14:38.455712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.335 [2024-07-15 23:14:38.455727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.335 [2024-07-15 23:14:38.455764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.335 [2024-07-15 23:14:38.455789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.335 [2024-07-15 23:14:38.455809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.335 [2024-07-15 23:14:38.455826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66176 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.335 [2024-07-15 23:14:38.455840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.335 [2024-07-15 23:14:38.455857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.335 [2024-07-15 23:14:38.455871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.335 [2024-07-15 23:14:38.455887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.335 [2024-07-15 23:14:38.455903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.335 [2024-07-15 23:14:38.455919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.335 [2024-07-15 23:14:38.455933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.335 [2024-07-15 23:14:38.455948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.335 [2024-07-15 23:14:38.455962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.335 [2024-07-15 23:14:38.455978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:66816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.335 [2024-07-15 23:14:38.455991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.335 [2024-07-15 23:14:38.456007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.335 [2024-07-15 23:14:38.456029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.335 [2024-07-15 23:14:38.456045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.335 [2024-07-15 23:14:38.456058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.335 [2024-07-15 23:14:38.456074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.335 [2024-07-15 23:14:38.456097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.335 [2024-07-15 23:14:38.456112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.335 [2024-07-15 23:14:38.456126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.335 [2024-07-15 23:14:38.456142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:67456 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:11:23.335 [2024-07-15 23:14:38.456156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.335 [2024-07-15 23:14:38.456172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.335 [2024-07-15 23:14:38.456186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.335 [2024-07-15 23:14:38.456205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.335 [2024-07-15 23:14:38.456220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.335 [2024-07-15 23:14:38.456236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:67840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.335 [2024-07-15 23:14:38.456250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.335 [2024-07-15 23:14:38.456265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.335 [2024-07-15 23:14:38.456279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.335 [2024-07-15 23:14:38.456295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:68096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.335 [2024-07-15 23:14:38.456308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.335 [2024-07-15 23:14:38.456324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:68224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.335 [2024-07-15 23:14:38.456338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.335 [2024-07-15 23:14:38.456354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:68352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.335 [2024-07-15 23:14:38.456367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.335 [2024-07-15 23:14:38.456383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:68480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.335 [2024-07-15 23:14:38.456398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.335 [2024-07-15 23:14:38.456428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:68608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.335 [2024-07-15 23:14:38.456442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.335 [2024-07-15 23:14:38.456457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:68736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:11:23.335 [2024-07-15 23:14:38.456470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.335 [2024-07-15 23:14:38.456486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:68864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.336 [2024-07-15 23:14:38.456500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.336 [2024-07-15 23:14:38.456514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:68992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.336 [2024-07-15 23:14:38.456528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.336 [2024-07-15 23:14:38.456543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:69120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.336 [2024-07-15 23:14:38.456556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.336 [2024-07-15 23:14:38.456572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.336 [2024-07-15 23:14:38.456589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.336 [2024-07-15 23:14:38.456604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.336 [2024-07-15 23:14:38.456619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.336 [2024-07-15 23:14:38.456634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:69504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.336 [2024-07-15 23:14:38.456647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.336 [2024-07-15 23:14:38.456662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:69632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.336 [2024-07-15 23:14:38.456675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.336 [2024-07-15 23:14:38.456690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.336 [2024-07-15 23:14:38.456704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.336 [2024-07-15 23:14:38.456719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:69888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.336 [2024-07-15 23:14:38.456732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.336 [2024-07-15 23:14:38.456772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.336 [2024-07-15 
23:14:38.456794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.336 [2024-07-15 23:14:38.456810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.336 [2024-07-15 23:14:38.456824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.336 [2024-07-15 23:14:38.456840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:70272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.336 [2024-07-15 23:14:38.456854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.336 [2024-07-15 23:14:38.456869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:70400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.336 [2024-07-15 23:14:38.456883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.336 [2024-07-15 23:14:38.456899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:70528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.336 [2024-07-15 23:14:38.456914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.336 [2024-07-15 23:14:38.456929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.336 [2024-07-15 23:14:38.456943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.336 [2024-07-15 23:14:38.456958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.336 [2024-07-15 23:14:38.456973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.336 [2024-07-15 23:14:38.456992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:70912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.336 [2024-07-15 23:14:38.457007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.336 [2024-07-15 23:14:38.457022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:71040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.336 [2024-07-15 23:14:38.457036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.336 [2024-07-15 23:14:38.457060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:71168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.336 [2024-07-15 23:14:38.457089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.336 [2024-07-15 23:14:38.457105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:71296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.336 [2024-07-15 23:14:38.457118] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.336 [2024-07-15 23:14:38.457133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:71424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.336 [2024-07-15 23:14:38.457146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.336 [2024-07-15 23:14:38.457161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:71552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.336 [2024-07-15 23:14:38.457175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.336 [2024-07-15 23:14:38.457190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:71680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.336 [2024-07-15 23:14:38.457203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.336 [2024-07-15 23:14:38.457218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:71808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.336 [2024-07-15 23:14:38.457232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.336 [2024-07-15 23:14:38.457247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:71936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.336 [2024-07-15 23:14:38.457260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.336 [2024-07-15 23:14:38.457350] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x128a200 was disconnected and freed. reset controller. 
00:11:23.336 [2024-07-15 23:14:38.458549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:11:23.336 23:14:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.336 23:14:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:23.336 23:14:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.336 23:14:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:23.336 task offset: 72064 on job bdev=Nvme0n1 fails 00:11:23.336 00:11:23.336 Latency(us) 00:11:23.336 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:23.336 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:23.336 Job: Nvme0n1 ended in about 0.39 seconds with error 00:11:23.336 Verification LBA range: start 0x0 length 0x400 00:11:23.336 Nvme0n1 : 0.39 1327.71 82.98 165.96 0.00 41625.53 2803.48 34952.53 00:11:23.336 =================================================================================================================== 00:11:23.336 Total : 1327.71 82.98 165.96 0.00 41625.53 2803.48 34952.53 00:11:23.336 [2024-07-15 23:14:38.460487] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:23.336 [2024-07-15 23:14:38.460515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe79080 (9): Bad file descriptor 00:11:23.336 23:14:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.336 23:14:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:11:23.336 [2024-07-15 23:14:38.513489] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
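The burst of ABORTED - SQ DELETION completions above is the expected fallout of host_management.sh revoking host access while bdevperf still has a full queue of 64 commands in flight: the target tears down the I/O qpair, every queued READ and WRITE is failed back to the initiator, and the initiator schedules a controller reset that succeeds once the host NQN is restored. Condensed to the two RPCs the test drives (a minimal sketch; the $rpc shorthand is illustrative, the rpc.py path and NQNs are the ones traced above, and the target is assumed reachable on the default /var/tmp/spdk.sock):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # revoke access: connections from host0 are dropped and in-flight I/O is aborted (SQ deletion)
    $rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # restore access so the initiator's automatic controller reset can reconnect
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0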
00:11:24.268 23:14:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2293717 00:11:24.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2293717) - No such process 00:11:24.268 23:14:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:11:24.268 23:14:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:11:24.268 23:14:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:11:24.268 23:14:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:11:24.268 23:14:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:11:24.268 23:14:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:11:24.268 23:14:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:24.268 23:14:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:24.268 { 00:11:24.268 "params": { 00:11:24.268 "name": "Nvme$subsystem", 00:11:24.268 "trtype": "$TEST_TRANSPORT", 00:11:24.268 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:24.268 "adrfam": "ipv4", 00:11:24.268 "trsvcid": "$NVMF_PORT", 00:11:24.268 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:24.268 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:24.268 "hdgst": ${hdgst:-false}, 00:11:24.268 "ddgst": ${ddgst:-false} 00:11:24.268 }, 00:11:24.268 "method": "bdev_nvme_attach_controller" 00:11:24.268 } 00:11:24.268 EOF 00:11:24.268 )") 00:11:24.268 23:14:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:11:24.268 23:14:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:11:24.268 23:14:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:11:24.268 23:14:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:24.268 "params": { 00:11:24.268 "name": "Nvme0", 00:11:24.268 "trtype": "tcp", 00:11:24.268 "traddr": "10.0.0.2", 00:11:24.268 "adrfam": "ipv4", 00:11:24.268 "trsvcid": "4420", 00:11:24.268 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:24.268 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:24.268 "hdgst": false, 00:11:24.268 "ddgst": false 00:11:24.268 }, 00:11:24.268 "method": "bdev_nvme_attach_controller" 00:11:24.268 }' 00:11:24.268 [2024-07-15 23:14:39.511797] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:11:24.268 [2024-07-15 23:14:39.511889] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2293987 ] 00:11:24.268 EAL: No free 2048 kB hugepages reported on node 1 00:11:24.268 [2024-07-15 23:14:39.571886] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.526 [2024-07-15 23:14:39.686180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.782 Running I/O for 1 seconds... 
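gen_nvmf_target_json assembles the bdevperf configuration on the fly and hands it to --json through /dev/fd/62. Written out as a standalone file, the same configuration would look roughly like the sketch below; the outer subsystems/bdev wrapper and the /tmp path are assumptions based on SPDK's standard JSON config layout, while the attach parameters are exactly the ones printed above:

    cat > /tmp/bdevperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # same invocation as above, reading the config from a file instead of /dev/fd/62
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1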
00:11:25.712 00:11:25.712 Latency(us) 00:11:25.712 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:25.712 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:25.712 Verification LBA range: start 0x0 length 0x400 00:11:25.712 Nvme0n1 : 1.03 1369.77 85.61 0.00 0.00 46028.52 8204.14 36505.98 00:11:25.712 =================================================================================================================== 00:11:25.712 Total : 1369.77 85.61 0.00 0.00 46028.52 8204.14 36505.98 00:11:25.969 23:14:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:11:25.969 23:14:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:11:25.969 23:14:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:11:25.969 23:14:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:25.969 23:14:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:11:25.969 23:14:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:25.969 23:14:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:11:25.969 23:14:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:25.969 23:14:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:11:25.969 23:14:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:25.970 23:14:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:25.970 rmmod nvme_tcp 00:11:25.970 rmmod nvme_fabrics 00:11:25.970 rmmod nvme_keyring 00:11:25.970 23:14:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:25.970 23:14:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:11:25.970 23:14:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:11:25.970 23:14:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2293540 ']' 00:11:25.970 23:14:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2293540 00:11:25.970 23:14:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 2293540 ']' 00:11:25.970 23:14:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 2293540 00:11:25.970 23:14:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:11:25.970 23:14:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:25.970 23:14:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2293540 00:11:25.970 23:14:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:25.970 23:14:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:25.970 23:14:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2293540' 00:11:25.970 killing process with pid 2293540 00:11:25.970 23:14:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 2293540 00:11:25.970 23:14:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 2293540 00:11:26.227 [2024-07-15 23:14:41.528708] app.c: 
716:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:11:26.485 23:14:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:26.485 23:14:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:26.485 23:14:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:26.485 23:14:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:26.485 23:14:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:26.485 23:14:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.485 23:14:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:26.485 23:14:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.382 23:14:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:28.382 23:14:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:11:28.382 00:11:28.382 real 0m9.277s 00:11:28.382 user 0m22.047s 00:11:28.382 sys 0m2.753s 00:11:28.382 23:14:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:28.382 23:14:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:28.382 ************************************ 00:11:28.382 END TEST nvmf_host_management 00:11:28.382 ************************************ 00:11:28.382 23:14:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:28.382 23:14:43 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:28.382 23:14:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:28.382 23:14:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:28.382 23:14:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:28.382 ************************************ 00:11:28.382 START TEST nvmf_lvol 00:11:28.382 ************************************ 00:11:28.382 23:14:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:28.639 * Looking for test storage... 
00:11:28.639 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.639 23:14:43 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:11:28.639 23:14:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:30.580 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:30.580 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:30.580 Found net devices under 0000:84:00.0: cvl_0_0 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:30.580 Found net devices under 0000:84:00.1: cvl_0_1 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:30.580 
23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:30.580 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:30.580 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:11:30.580 00:11:30.580 --- 10.0.0.2 ping statistics --- 00:11:30.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.580 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:30.580 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:30.580 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:11:30.580 00:11:30.580 --- 10.0.0.1 ping statistics --- 00:11:30.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.580 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2296087 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2296087 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 2296087 ']' 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:30.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:30.580 23:14:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:30.837 [2024-07-15 23:14:45.919765] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:11:30.837 [2024-07-15 23:14:45.919847] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:30.837 EAL: No free 2048 kB hugepages reported on node 1 00:11:30.837 [2024-07-15 23:14:45.984136] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:30.837 [2024-07-15 23:14:46.094713] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:30.837 [2024-07-15 23:14:46.094791] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
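Before the lvol test can start its target, nvmf_tcp_init has already split the two detected e810 ports between a target namespace and the root namespace: cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2, while cvl_0_1 stays in the root namespace as 10.0.0.1, which is why the ping check runs once in each direction and why nvmf_tgt is launched under ip netns exec. A condensed sketch of that plumbing, lifted from the nvmf_tcp_init trace above (interface names, addresses, and paths are the ones from this run):

    # target-side port lives in its own namespace; initiator-side port stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # let NVMe/TCP traffic in on the default port
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # sanity-check both directions, then start the target inside the namespace
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7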
00:11:30.837 [2024-07-15 23:14:46.094805] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:30.838 [2024-07-15 23:14:46.094817] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:30.838 [2024-07-15 23:14:46.094826] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:30.838 [2024-07-15 23:14:46.094883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:30.838 [2024-07-15 23:14:46.094940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:30.838 [2024-07-15 23:14:46.094943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.094 23:14:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:31.094 23:14:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:11:31.094 23:14:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:31.094 23:14:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:31.094 23:14:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:31.094 23:14:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:31.094 23:14:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:31.351 [2024-07-15 23:14:46.443031] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:31.351 23:14:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:31.608 23:14:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:31.608 23:14:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:31.865 23:14:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:31.865 23:14:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:32.122 23:14:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:32.380 23:14:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=bc07ca11-43c5-4417-8ee0-8a5cc48d3480 00:11:32.380 23:14:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bc07ca11-43c5-4417-8ee0-8a5cc48d3480 lvol 20 00:11:32.637 23:14:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=3802ba54-10bc-4548-8d81-889a5bf50d25 00:11:32.637 23:14:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:32.895 23:14:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3802ba54-10bc-4548-8d81-889a5bf50d25 00:11:33.152 23:14:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:11:33.409 [2024-07-15 23:14:48.484563] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:33.409 23:14:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:33.681 23:14:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2296510 00:11:33.681 23:14:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:11:33.681 23:14:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:11:33.681 EAL: No free 2048 kB hugepages reported on node 1 00:11:34.629 23:14:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 3802ba54-10bc-4548-8d81-889a5bf50d25 MY_SNAPSHOT 00:11:34.886 23:14:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=2f100291-aa18-4d07-b7c6-b79987bb9da2 00:11:34.886 23:14:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 3802ba54-10bc-4548-8d81-889a5bf50d25 30 00:11:35.452 23:14:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 2f100291-aa18-4d07-b7c6-b79987bb9da2 MY_CLONE 00:11:35.710 23:14:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=8791136f-4637-40bd-a425-191ba1acf9f6 00:11:35.710 23:14:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 8791136f-4637-40bd-a425-191ba1acf9f6 00:11:36.275 23:14:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2296510 00:11:44.379 Initializing NVMe Controllers 00:11:44.379 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:44.379 Controller IO queue size 128, less than required. 00:11:44.379 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:44.379 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:11:44.379 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:11:44.379 Initialization complete. Launching workers. 
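Strung together, the lvol test's provisioning path is: two malloc bdevs striped into a raid0, an lvstore on top of the raid, a lvol of size 20 (grown to 30 later) exported through cnode0, and then a snapshot/resize/clone/inflate cycle driven while spdk_nvme_perf keeps writing to the namespace. The same sequence as plain rpc.py calls, a sketch assembled from the trace above; the shell variables capturing the returned UUIDs are illustrative, mirroring what the test script does:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512                        # -> Malloc0
    $rpc bdev_malloc_create 64 512                        # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)        # lvstore UUID
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)       # lvol UUID
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # while spdk_nvme_perf runs against the namespace:
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # snapshot UUID
    $rpc bdev_lvol_resize "$lvol" 30                      # grow the live lvol
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)        # clone UUID
    $rpc bdev_lvol_inflate "$clone"                       # decouple the clone from its snapshot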
00:11:44.379 ======================================================== 00:11:44.379 Latency(us) 00:11:44.379 Device Information : IOPS MiB/s Average min max 00:11:44.379 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10540.00 41.17 12148.61 1467.25 88391.05 00:11:44.379 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10397.20 40.61 12318.60 2085.68 70885.76 00:11:44.379 ======================================================== 00:11:44.379 Total : 20937.20 81.79 12233.03 1467.25 88391.05 00:11:44.379 00:11:44.379 23:14:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:44.379 23:14:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3802ba54-10bc-4548-8d81-889a5bf50d25 00:11:44.637 23:14:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bc07ca11-43c5-4417-8ee0-8a5cc48d3480 00:11:44.894 23:15:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:11:44.894 23:15:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:11:44.894 23:15:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:11:44.894 23:15:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:44.894 23:15:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:11:44.894 23:15:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:44.894 23:15:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:11:44.894 23:15:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:44.894 23:15:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:44.894 rmmod nvme_tcp 00:11:44.894 rmmod nvme_fabrics 00:11:44.894 rmmod nvme_keyring 00:11:44.894 23:15:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:44.894 23:15:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:11:44.894 23:15:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:11:44.894 23:15:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2296087 ']' 00:11:44.894 23:15:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2296087 00:11:44.894 23:15:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 2296087 ']' 00:11:44.894 23:15:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 2296087 00:11:44.894 23:15:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:11:44.894 23:15:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:44.894 23:15:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2296087 00:11:44.894 23:15:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:44.894 23:15:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:44.894 23:15:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2296087' 00:11:44.894 killing process with pid 2296087 00:11:44.894 23:15:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 2296087 00:11:44.894 23:15:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 2296087 00:11:45.459 23:15:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:45.459 
23:15:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:45.459 23:15:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:45.459 23:15:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:45.459 23:15:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:45.459 23:15:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.460 23:15:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:45.460 23:15:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:47.356 23:15:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:47.356 00:11:47.356 real 0m18.871s 00:11:47.356 user 1m4.537s 00:11:47.356 sys 0m5.726s 00:11:47.356 23:15:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:47.356 23:15:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:47.356 ************************************ 00:11:47.356 END TEST nvmf_lvol 00:11:47.356 ************************************ 00:11:47.356 23:15:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:47.356 23:15:02 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:47.356 23:15:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:47.356 23:15:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:47.356 23:15:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:47.356 ************************************ 00:11:47.356 START TEST nvmf_lvs_grow 00:11:47.356 ************************************ 00:11:47.356 23:15:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:47.356 * Looking for test storage... 
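In outline, the lvs_grow test that begins here builds an lvstore on an AIO bdev backed by a 200 MiB file, then doubles the file and grows the lvstore in place. A condensed sketch of the RPCs it uses, with a hypothetical /tmp path standing in for the workspace file and the lvstore UUID captured from the create call:

  rpc=./scripts/rpc.py
  truncate -s 200M /tmp/aio_bdev                        # backing file (the test keeps it under its target/ directory)
  $rpc bdev_aio_create /tmp/aio_bdev aio_bdev 4096      # AIO bdev with a 4 KiB block size
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  $rpc bdev_lvol_create -u "$lvs" lvol 150              # 150 MiB lvol inside the store
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 data clusters before growing
  truncate -s 400M /tmp/aio_bdev                        # enlarge the backing file
  $rpc bdev_aio_rescan aio_bdev                         # AIO bdev picks up the new size (51200 -> 102400 blocks)
  $rpc bdev_lvol_grow_lvstore -u "$lvs"                 # lvstore now reports 99 data clusters

The clean and dirty variants below run this flow with bdevperf I/O on top, exporting the lvol through nqn.2016-06.io.spdk:cnode0 over TCP.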
00:11:47.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:47.356 23:15:02 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:47.356 23:15:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:11:47.356 23:15:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:47.356 23:15:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:47.356 23:15:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:47.357 23:15:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:47.357 23:15:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:47.357 23:15:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:47.357 23:15:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:47.357 23:15:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:47.357 23:15:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:47.357 23:15:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:47.357 23:15:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:47.357 23:15:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:47.357 23:15:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:47.357 23:15:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:47.357 23:15:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:47.357 23:15:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:47.357 23:15:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:47.357 23:15:02 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:47.357 23:15:02 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:47.357 23:15:02 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:47.357 23:15:02 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.357 23:15:02 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.357 23:15:02 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.357 23:15:02 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:11:47.357 23:15:02 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.357 23:15:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:11:47.357 23:15:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:47.357 23:15:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:47.357 23:15:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:47.357 23:15:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:47.357 23:15:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:47.357 23:15:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:47.357 23:15:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:47.357 23:15:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:47.357 23:15:02 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:47.357 23:15:02 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:47.357 23:15:02 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:11:47.357 23:15:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:47.357 23:15:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:47.357 23:15:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:47.357 23:15:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:47.357 23:15:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:47.357 23:15:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.357 23:15:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:47.357 23:15:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:47.357 23:15:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:47.357 23:15:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:47.357 23:15:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:11:47.357 23:15:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:49.883 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:49.883 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:49.883 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:49.884 Found net devices under 0000:84:00.0: cvl_0_0 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:49.884 Found net devices under 0000:84:00.1: cvl_0_1 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:49.884 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:49.884 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:11:49.884 00:11:49.884 --- 10.0.0.2 ping statistics --- 00:11:49.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.884 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:49.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:49.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:11:49.884 00:11:49.884 --- 10.0.0.1 ping statistics --- 00:11:49.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.884 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2299899 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2299899 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 2299899 ']' 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:49.884 23:15:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:49.884 [2024-07-15 23:15:04.870099] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:11:49.884 [2024-07-15 23:15:04.870180] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:49.884 EAL: No free 2048 kB hugepages reported on node 1 00:11:49.884 [2024-07-15 23:15:04.937969] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.884 [2024-07-15 23:15:05.053370] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:49.884 [2024-07-15 23:15:05.053432] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:49.884 [2024-07-15 23:15:05.053456] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:49.884 [2024-07-15 23:15:05.053472] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:49.884 [2024-07-15 23:15:05.053483] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:49.884 [2024-07-15 23:15:05.053533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.818 23:15:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:50.818 23:15:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:11:50.818 23:15:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:50.818 23:15:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:50.818 23:15:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:50.818 23:15:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:50.818 23:15:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:50.818 [2024-07-15 23:15:06.078386] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:50.818 23:15:06 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:11:50.818 23:15:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:50.818 23:15:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:50.818 23:15:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:50.818 ************************************ 00:11:50.818 START TEST lvs_grow_clean 00:11:50.818 ************************************ 00:11:50.818 23:15:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:11:50.818 23:15:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:50.818 23:15:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:50.818 23:15:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:50.818 23:15:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:50.818 23:15:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:50.818 23:15:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:50.818 23:15:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:51.076 23:15:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:51.076 23:15:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:51.333 23:15:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:11:51.333 23:15:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:51.591 23:15:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=af18ccf7-b58d-40d4-bd45-26f044a57718 00:11:51.591 23:15:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af18ccf7-b58d-40d4-bd45-26f044a57718 00:11:51.591 23:15:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:51.848 23:15:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:51.848 23:15:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:51.848 23:15:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u af18ccf7-b58d-40d4-bd45-26f044a57718 lvol 150 00:11:52.106 23:15:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=264f8d2a-a8a4-4fb5-bb44-85a9e4b08ad5 00:11:52.106 23:15:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:52.106 23:15:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:52.363 [2024-07-15 23:15:07.442962] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:52.363 [2024-07-15 23:15:07.443061] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:52.363 true 00:11:52.363 23:15:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af18ccf7-b58d-40d4-bd45-26f044a57718 00:11:52.363 23:15:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:52.620 23:15:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:52.620 23:15:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:52.877 23:15:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 264f8d2a-a8a4-4fb5-bb44-85a9e4b08ad5 00:11:53.135 23:15:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:53.135 [2024-07-15 23:15:08.446100] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:53.393 23:15:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:53.651 23:15:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2300879 00:11:53.651 23:15:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:53.651 23:15:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:53.651 23:15:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2300879 /var/tmp/bdevperf.sock 00:11:53.651 23:15:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 2300879 ']' 00:11:53.651 23:15:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:53.651 23:15:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:53.651 23:15:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:53.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:53.651 23:15:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:53.651 23:15:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:53.651 [2024-07-15 23:15:08.758939] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
00:11:53.651 [2024-07-15 23:15:08.759022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2300879 ] 00:11:53.651 EAL: No free 2048 kB hugepages reported on node 1 00:11:53.652 [2024-07-15 23:15:08.820430] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.652 [2024-07-15 23:15:08.931664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:53.909 23:15:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:53.909 23:15:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:11:53.909 23:15:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:54.166 Nvme0n1 00:11:54.166 23:15:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:54.423 [ 00:11:54.423 { 00:11:54.423 "name": "Nvme0n1", 00:11:54.423 "aliases": [ 00:11:54.423 "264f8d2a-a8a4-4fb5-bb44-85a9e4b08ad5" 00:11:54.423 ], 00:11:54.423 "product_name": "NVMe disk", 00:11:54.423 "block_size": 4096, 00:11:54.423 "num_blocks": 38912, 00:11:54.423 "uuid": "264f8d2a-a8a4-4fb5-bb44-85a9e4b08ad5", 00:11:54.423 "assigned_rate_limits": { 00:11:54.423 "rw_ios_per_sec": 0, 00:11:54.423 "rw_mbytes_per_sec": 0, 00:11:54.423 "r_mbytes_per_sec": 0, 00:11:54.423 "w_mbytes_per_sec": 0 00:11:54.423 }, 00:11:54.423 "claimed": false, 00:11:54.423 "zoned": false, 00:11:54.423 "supported_io_types": { 00:11:54.423 "read": true, 00:11:54.423 "write": true, 00:11:54.423 "unmap": true, 00:11:54.423 "flush": true, 00:11:54.423 "reset": true, 00:11:54.423 "nvme_admin": true, 00:11:54.423 "nvme_io": true, 00:11:54.423 "nvme_io_md": false, 00:11:54.423 "write_zeroes": true, 00:11:54.423 "zcopy": false, 00:11:54.423 "get_zone_info": false, 00:11:54.423 "zone_management": false, 00:11:54.423 "zone_append": false, 00:11:54.423 "compare": true, 00:11:54.423 "compare_and_write": true, 00:11:54.423 "abort": true, 00:11:54.423 "seek_hole": false, 00:11:54.423 "seek_data": false, 00:11:54.423 "copy": true, 00:11:54.423 "nvme_iov_md": false 00:11:54.423 }, 00:11:54.423 "memory_domains": [ 00:11:54.423 { 00:11:54.423 "dma_device_id": "system", 00:11:54.423 "dma_device_type": 1 00:11:54.423 } 00:11:54.423 ], 00:11:54.423 "driver_specific": { 00:11:54.423 "nvme": [ 00:11:54.423 { 00:11:54.423 "trid": { 00:11:54.423 "trtype": "TCP", 00:11:54.423 "adrfam": "IPv4", 00:11:54.423 "traddr": "10.0.0.2", 00:11:54.423 "trsvcid": "4420", 00:11:54.423 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:54.423 }, 00:11:54.423 "ctrlr_data": { 00:11:54.423 "cntlid": 1, 00:11:54.423 "vendor_id": "0x8086", 00:11:54.423 "model_number": "SPDK bdev Controller", 00:11:54.423 "serial_number": "SPDK0", 00:11:54.423 "firmware_revision": "24.09", 00:11:54.423 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:54.423 "oacs": { 00:11:54.423 "security": 0, 00:11:54.423 "format": 0, 00:11:54.423 "firmware": 0, 00:11:54.423 "ns_manage": 0 00:11:54.423 }, 00:11:54.423 "multi_ctrlr": true, 00:11:54.423 "ana_reporting": false 00:11:54.423 }, 
00:11:54.424 "vs": { 00:11:54.424 "nvme_version": "1.3" 00:11:54.424 }, 00:11:54.424 "ns_data": { 00:11:54.424 "id": 1, 00:11:54.424 "can_share": true 00:11:54.424 } 00:11:54.424 } 00:11:54.424 ], 00:11:54.424 "mp_policy": "active_passive" 00:11:54.424 } 00:11:54.424 } 00:11:54.424 ] 00:11:54.424 23:15:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2300988 00:11:54.424 23:15:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:54.424 23:15:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:54.424 Running I/O for 10 seconds... 00:11:55.796 Latency(us) 00:11:55.796 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:55.796 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:55.796 Nvme0n1 : 1.00 15045.00 58.77 0.00 0.00 0.00 0.00 0.00 00:11:55.796 =================================================================================================================== 00:11:55.796 Total : 15045.00 58.77 0.00 0.00 0.00 0.00 0.00 00:11:55.796 00:11:56.360 23:15:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u af18ccf7-b58d-40d4-bd45-26f044a57718 00:11:56.617 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:56.617 Nvme0n1 : 2.00 14820.00 57.89 0.00 0.00 0.00 0.00 0.00 00:11:56.617 =================================================================================================================== 00:11:56.617 Total : 14820.00 57.89 0.00 0.00 0.00 0.00 0.00 00:11:56.617 00:11:56.617 true 00:11:56.617 23:15:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af18ccf7-b58d-40d4-bd45-26f044a57718 00:11:56.617 23:15:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:56.875 23:15:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:56.875 23:15:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:56.875 23:15:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2300988 00:11:57.438 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:57.438 Nvme0n1 : 3.00 14783.67 57.75 0.00 0.00 0.00 0.00 0.00 00:11:57.438 =================================================================================================================== 00:11:57.438 Total : 14783.67 57.75 0.00 0.00 0.00 0.00 0.00 00:11:57.438 00:11:58.807 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:58.807 Nvme0n1 : 4.00 14947.75 58.39 0.00 0.00 0.00 0.00 0.00 00:11:58.807 =================================================================================================================== 00:11:58.807 Total : 14947.75 58.39 0.00 0.00 0.00 0.00 0.00 00:11:58.807 00:11:59.740 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:59.740 Nvme0n1 : 5.00 15007.60 58.62 0.00 0.00 0.00 0.00 0.00 00:11:59.740 =================================================================================================================== 00:11:59.740 
Total : 15007.60 58.62 0.00 0.00 0.00 0.00 0.00 00:11:59.740 00:12:00.672 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:00.672 Nvme0n1 : 6.00 15070.83 58.87 0.00 0.00 0.00 0.00 0.00 00:12:00.672 =================================================================================================================== 00:12:00.672 Total : 15070.83 58.87 0.00 0.00 0.00 0.00 0.00 00:12:00.672 00:12:01.604 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:01.604 Nvme0n1 : 7.00 15169.71 59.26 0.00 0.00 0.00 0.00 0.00 00:12:01.604 =================================================================================================================== 00:12:01.604 Total : 15169.71 59.26 0.00 0.00 0.00 0.00 0.00 00:12:01.604 00:12:02.559 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:02.559 Nvme0n1 : 8.00 15221.88 59.46 0.00 0.00 0.00 0.00 0.00 00:12:02.559 =================================================================================================================== 00:12:02.560 Total : 15221.88 59.46 0.00 0.00 0.00 0.00 0.00 00:12:02.560 00:12:03.491 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:03.491 Nvme0n1 : 9.00 15217.56 59.44 0.00 0.00 0.00 0.00 0.00 00:12:03.491 =================================================================================================================== 00:12:03.491 Total : 15217.56 59.44 0.00 0.00 0.00 0.00 0.00 00:12:03.491 00:12:04.423 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:04.423 Nvme0n1 : 10.00 15209.80 59.41 0.00 0.00 0.00 0.00 0.00 00:12:04.423 =================================================================================================================== 00:12:04.423 Total : 15209.80 59.41 0.00 0.00 0.00 0.00 0.00 00:12:04.423 00:12:04.423 00:12:04.423 Latency(us) 00:12:04.423 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:04.423 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:04.423 Nvme0n1 : 10.01 15212.52 59.42 0.00 0.00 8408.46 4563.25 18544.26 00:12:04.423 =================================================================================================================== 00:12:04.423 Total : 15212.52 59.42 0.00 0.00 8408.46 4563.25 18544.26 00:12:04.423 0 00:12:04.423 23:15:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2300879 00:12:04.423 23:15:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 2300879 ']' 00:12:04.423 23:15:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 2300879 00:12:04.423 23:15:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:12:04.423 23:15:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:04.423 23:15:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2300879 00:12:04.681 23:15:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:04.681 23:15:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:04.681 23:15:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2300879' 00:12:04.681 killing process with pid 2300879 00:12:04.681 23:15:19 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 2300879 00:12:04.681 Received shutdown signal, test time was about 10.000000 seconds 00:12:04.681 00:12:04.681 Latency(us) 00:12:04.681 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:04.681 =================================================================================================================== 00:12:04.681 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:04.681 23:15:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 2300879 00:12:04.940 23:15:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:05.197 23:15:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:05.456 23:15:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af18ccf7-b58d-40d4-bd45-26f044a57718 00:12:05.456 23:15:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:05.713 23:15:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:05.713 23:15:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:12:05.713 23:15:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:05.970 [2024-07-15 23:15:21.031446] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:05.970 23:15:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af18ccf7-b58d-40d4-bd45-26f044a57718 00:12:05.970 23:15:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:12:05.970 23:15:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af18ccf7-b58d-40d4-bd45-26f044a57718 00:12:05.970 23:15:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:05.970 23:15:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:05.970 23:15:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:05.970 23:15:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:05.970 23:15:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:05.970 23:15:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:05.970 23:15:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:05.970 23:15:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:05.970 23:15:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af18ccf7-b58d-40d4-bd45-26f044a57718 00:12:06.228 request: 00:12:06.228 { 00:12:06.228 "uuid": "af18ccf7-b58d-40d4-bd45-26f044a57718", 00:12:06.228 "method": "bdev_lvol_get_lvstores", 00:12:06.228 "req_id": 1 00:12:06.228 } 00:12:06.228 Got JSON-RPC error response 00:12:06.228 response: 00:12:06.228 { 00:12:06.228 "code": -19, 00:12:06.228 "message": "No such device" 00:12:06.228 } 00:12:06.228 23:15:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:12:06.228 23:15:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:06.228 23:15:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:06.228 23:15:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:06.228 23:15:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:06.486 aio_bdev 00:12:06.486 23:15:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 264f8d2a-a8a4-4fb5-bb44-85a9e4b08ad5 00:12:06.486 23:15:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=264f8d2a-a8a4-4fb5-bb44-85a9e4b08ad5 00:12:06.486 23:15:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:06.486 23:15:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:12:06.486 23:15:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:06.486 23:15:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:06.486 23:15:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:06.744 23:15:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 264f8d2a-a8a4-4fb5-bb44-85a9e4b08ad5 -t 2000 00:12:07.002 [ 00:12:07.002 { 00:12:07.002 "name": "264f8d2a-a8a4-4fb5-bb44-85a9e4b08ad5", 00:12:07.002 "aliases": [ 00:12:07.002 "lvs/lvol" 00:12:07.002 ], 00:12:07.002 "product_name": "Logical Volume", 00:12:07.002 "block_size": 4096, 00:12:07.002 "num_blocks": 38912, 00:12:07.002 "uuid": "264f8d2a-a8a4-4fb5-bb44-85a9e4b08ad5", 00:12:07.002 "assigned_rate_limits": { 00:12:07.002 "rw_ios_per_sec": 0, 00:12:07.002 "rw_mbytes_per_sec": 0, 00:12:07.002 "r_mbytes_per_sec": 0, 00:12:07.002 "w_mbytes_per_sec": 0 00:12:07.002 }, 00:12:07.002 "claimed": false, 00:12:07.002 "zoned": false, 00:12:07.002 "supported_io_types": { 00:12:07.002 "read": true, 00:12:07.002 "write": true, 00:12:07.002 "unmap": true, 00:12:07.002 "flush": false, 00:12:07.002 "reset": true, 00:12:07.002 "nvme_admin": false, 00:12:07.002 "nvme_io": false, 00:12:07.002 
"nvme_io_md": false, 00:12:07.002 "write_zeroes": true, 00:12:07.002 "zcopy": false, 00:12:07.002 "get_zone_info": false, 00:12:07.002 "zone_management": false, 00:12:07.002 "zone_append": false, 00:12:07.002 "compare": false, 00:12:07.002 "compare_and_write": false, 00:12:07.002 "abort": false, 00:12:07.002 "seek_hole": true, 00:12:07.002 "seek_data": true, 00:12:07.002 "copy": false, 00:12:07.002 "nvme_iov_md": false 00:12:07.002 }, 00:12:07.002 "driver_specific": { 00:12:07.002 "lvol": { 00:12:07.002 "lvol_store_uuid": "af18ccf7-b58d-40d4-bd45-26f044a57718", 00:12:07.002 "base_bdev": "aio_bdev", 00:12:07.002 "thin_provision": false, 00:12:07.002 "num_allocated_clusters": 38, 00:12:07.002 "snapshot": false, 00:12:07.002 "clone": false, 00:12:07.002 "esnap_clone": false 00:12:07.002 } 00:12:07.002 } 00:12:07.002 } 00:12:07.002 ] 00:12:07.002 23:15:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:12:07.002 23:15:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af18ccf7-b58d-40d4-bd45-26f044a57718 00:12:07.002 23:15:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:07.260 23:15:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:07.260 23:15:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af18ccf7-b58d-40d4-bd45-26f044a57718 00:12:07.260 23:15:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:07.260 23:15:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:07.260 23:15:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 264f8d2a-a8a4-4fb5-bb44-85a9e4b08ad5 00:12:07.517 23:15:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u af18ccf7-b58d-40d4-bd45-26f044a57718 00:12:08.083 23:15:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:08.083 23:15:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:08.340 00:12:08.340 real 0m17.283s 00:12:08.340 user 0m16.682s 00:12:08.340 sys 0m1.920s 00:12:08.340 23:15:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:08.340 23:15:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:08.340 ************************************ 00:12:08.340 END TEST lvs_grow_clean 00:12:08.340 ************************************ 00:12:08.340 23:15:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:12:08.340 23:15:23 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:12:08.340 23:15:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:08.340 23:15:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:12:08.340 23:15:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:08.340 ************************************ 00:12:08.340 START TEST lvs_grow_dirty 00:12:08.340 ************************************ 00:12:08.340 23:15:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:12:08.340 23:15:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:08.340 23:15:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:08.340 23:15:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:08.340 23:15:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:08.340 23:15:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:08.340 23:15:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:08.340 23:15:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:08.340 23:15:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:08.340 23:15:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:08.598 23:15:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:08.598 23:15:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:08.855 23:15:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=6a1abd86-0d31-450d-bb5f-1a29e9345f90 00:12:08.855 23:15:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a1abd86-0d31-450d-bb5f-1a29e9345f90 00:12:08.855 23:15:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:09.112 23:15:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:09.112 23:15:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:09.112 23:15:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6a1abd86-0d31-450d-bb5f-1a29e9345f90 lvol 150 00:12:09.369 23:15:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=ca5c49a9-c884-471b-b052-38e287e9e412 00:12:09.369 23:15:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:09.369 23:15:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:09.626 
[2024-07-15 23:15:24.797098] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:09.626 [2024-07-15 23:15:24.797180] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:09.626 true 00:12:09.626 23:15:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a1abd86-0d31-450d-bb5f-1a29e9345f90 00:12:09.626 23:15:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:09.884 23:15:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:09.884 23:15:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:10.142 23:15:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ca5c49a9-c884-471b-b052-38e287e9e412 00:12:10.400 23:15:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:10.658 [2024-07-15 23:15:25.804149] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:10.658 23:15:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:10.916 23:15:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2303024 00:12:10.916 23:15:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:10.916 23:15:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:10.916 23:15:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2303024 /var/tmp/bdevperf.sock 00:12:10.916 23:15:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2303024 ']' 00:12:10.916 23:15:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:10.916 23:15:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:10.916 23:15:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:10.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
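Once the "Waiting for process..." message appears, bdevperf is up with its RPC socket but idle (started with -z); the test attaches the exported namespace and only then starts the workload. The pattern, assuming it is run from the SPDK source root, is roughly:

  rpc=./scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  $rpc -s "$sock" bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0             # creates bdev Nvme0n1 from the target's lvol namespace
  $rpc -s "$sock" bdev_get_bdevs -b Nvme0n1 -t 3000     # block until the bdev appears (the script passes a 3000 ms timeout)
  ./examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests   # run the randwrite job configured on bdevperf's command line

The attach call and the bdev_get_bdevs JSON dump that follow in the log are exactly this sequence for the dirty variant.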
00:12:10.916 23:15:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:10.916 23:15:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:10.916 [2024-07-15 23:15:26.102285] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:12:10.916 [2024-07-15 23:15:26.102355] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2303024 ] 00:12:10.916 EAL: No free 2048 kB hugepages reported on node 1 00:12:10.916 [2024-07-15 23:15:26.164372] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.174 [2024-07-15 23:15:26.282402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:11.174 23:15:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:11.174 23:15:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:12:11.174 23:15:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:11.739 Nvme0n1 00:12:11.739 23:15:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:11.997 [ 00:12:11.997 { 00:12:11.997 "name": "Nvme0n1", 00:12:11.997 "aliases": [ 00:12:11.997 "ca5c49a9-c884-471b-b052-38e287e9e412" 00:12:11.997 ], 00:12:11.997 "product_name": "NVMe disk", 00:12:11.997 "block_size": 4096, 00:12:11.997 "num_blocks": 38912, 00:12:11.997 "uuid": "ca5c49a9-c884-471b-b052-38e287e9e412", 00:12:11.997 "assigned_rate_limits": { 00:12:11.997 "rw_ios_per_sec": 0, 00:12:11.997 "rw_mbytes_per_sec": 0, 00:12:11.997 "r_mbytes_per_sec": 0, 00:12:11.997 "w_mbytes_per_sec": 0 00:12:11.997 }, 00:12:11.997 "claimed": false, 00:12:11.997 "zoned": false, 00:12:11.997 "supported_io_types": { 00:12:11.997 "read": true, 00:12:11.997 "write": true, 00:12:11.997 "unmap": true, 00:12:11.997 "flush": true, 00:12:11.997 "reset": true, 00:12:11.997 "nvme_admin": true, 00:12:11.997 "nvme_io": true, 00:12:11.997 "nvme_io_md": false, 00:12:11.997 "write_zeroes": true, 00:12:11.997 "zcopy": false, 00:12:11.997 "get_zone_info": false, 00:12:11.997 "zone_management": false, 00:12:11.997 "zone_append": false, 00:12:11.997 "compare": true, 00:12:11.997 "compare_and_write": true, 00:12:11.997 "abort": true, 00:12:11.997 "seek_hole": false, 00:12:11.997 "seek_data": false, 00:12:11.997 "copy": true, 00:12:11.997 "nvme_iov_md": false 00:12:11.997 }, 00:12:11.997 "memory_domains": [ 00:12:11.997 { 00:12:11.997 "dma_device_id": "system", 00:12:11.997 "dma_device_type": 1 00:12:11.997 } 00:12:11.997 ], 00:12:11.997 "driver_specific": { 00:12:11.997 "nvme": [ 00:12:11.997 { 00:12:11.997 "trid": { 00:12:11.997 "trtype": "TCP", 00:12:11.997 "adrfam": "IPv4", 00:12:11.997 "traddr": "10.0.0.2", 00:12:11.997 "trsvcid": "4420", 00:12:11.997 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:11.997 }, 00:12:11.997 "ctrlr_data": { 00:12:11.997 "cntlid": 1, 00:12:11.997 "vendor_id": "0x8086", 00:12:11.997 "model_number": "SPDK bdev Controller", 00:12:11.997 "serial_number": "SPDK0", 
00:12:11.997 "firmware_revision": "24.09", 00:12:11.997 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:11.997 "oacs": { 00:12:11.997 "security": 0, 00:12:11.997 "format": 0, 00:12:11.997 "firmware": 0, 00:12:11.997 "ns_manage": 0 00:12:11.997 }, 00:12:11.997 "multi_ctrlr": true, 00:12:11.997 "ana_reporting": false 00:12:11.997 }, 00:12:11.997 "vs": { 00:12:11.997 "nvme_version": "1.3" 00:12:11.997 }, 00:12:11.997 "ns_data": { 00:12:11.997 "id": 1, 00:12:11.997 "can_share": true 00:12:11.997 } 00:12:11.997 } 00:12:11.997 ], 00:12:11.997 "mp_policy": "active_passive" 00:12:11.997 } 00:12:11.997 } 00:12:11.997 ] 00:12:11.997 23:15:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2303156 00:12:11.998 23:15:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:11.998 23:15:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:11.998 Running I/O for 10 seconds... 00:12:12.929 Latency(us) 00:12:12.930 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:12.930 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:12.930 Nvme0n1 : 1.00 14378.00 56.16 0.00 0.00 0.00 0.00 0.00 00:12:12.930 =================================================================================================================== 00:12:12.930 Total : 14378.00 56.16 0.00 0.00 0.00 0.00 0.00 00:12:12.930 00:12:13.864 23:15:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6a1abd86-0d31-450d-bb5f-1a29e9345f90 00:12:14.122 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:14.122 Nvme0n1 : 2.00 14434.50 56.38 0.00 0.00 0.00 0.00 0.00 00:12:14.122 =================================================================================================================== 00:12:14.122 Total : 14434.50 56.38 0.00 0.00 0.00 0.00 0.00 00:12:14.122 00:12:14.122 true 00:12:14.122 23:15:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a1abd86-0d31-450d-bb5f-1a29e9345f90 00:12:14.122 23:15:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:14.689 23:15:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:14.689 23:15:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:14.689 23:15:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2303156 00:12:14.947 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:14.947 Nvme0n1 : 3.00 14488.67 56.60 0.00 0.00 0.00 0.00 0.00 00:12:14.947 =================================================================================================================== 00:12:14.947 Total : 14488.67 56.60 0.00 0.00 0.00 0.00 0.00 00:12:14.947 00:12:16.321 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:16.321 Nvme0n1 : 4.00 14572.50 56.92 0.00 0.00 0.00 0.00 0.00 00:12:16.321 =================================================================================================================== 00:12:16.321 Total : 14572.50 56.92 0.00 
0.00 0.00 0.00 0.00 00:12:16.321 00:12:16.888 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:16.888 Nvme0n1 : 5.00 14833.80 57.94 0.00 0.00 0.00 0.00 0.00 00:12:16.888 =================================================================================================================== 00:12:16.888 Total : 14833.80 57.94 0.00 0.00 0.00 0.00 0.00 00:12:16.888 00:12:18.263 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:18.264 Nvme0n1 : 6.00 14857.00 58.04 0.00 0.00 0.00 0.00 0.00 00:12:18.264 =================================================================================================================== 00:12:18.264 Total : 14857.00 58.04 0.00 0.00 0.00 0.00 0.00 00:12:18.264 00:12:19.198 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:19.198 Nvme0n1 : 7.00 14965.86 58.46 0.00 0.00 0.00 0.00 0.00 00:12:19.198 =================================================================================================================== 00:12:19.198 Total : 14965.86 58.46 0.00 0.00 0.00 0.00 0.00 00:12:19.198 00:12:20.132 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:20.132 Nvme0n1 : 8.00 14982.88 58.53 0.00 0.00 0.00 0.00 0.00 00:12:20.132 =================================================================================================================== 00:12:20.132 Total : 14982.88 58.53 0.00 0.00 0.00 0.00 0.00 00:12:20.132 00:12:21.066 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:21.066 Nvme0n1 : 9.00 15012.78 58.64 0.00 0.00 0.00 0.00 0.00 00:12:21.066 =================================================================================================================== 00:12:21.066 Total : 15012.78 58.64 0.00 0.00 0.00 0.00 0.00 00:12:21.066 00:12:21.999 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:21.999 Nvme0n1 : 10.00 15086.90 58.93 0.00 0.00 0.00 0.00 0.00 00:12:21.999 =================================================================================================================== 00:12:21.999 Total : 15086.90 58.93 0.00 0.00 0.00 0.00 0.00 00:12:21.999 00:12:21.999 00:12:21.999 Latency(us) 00:12:21.999 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:21.999 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:21.999 Nvme0n1 : 10.00 15092.57 58.96 0.00 0.00 8476.31 2220.94 16602.45 00:12:21.999 =================================================================================================================== 00:12:21.999 Total : 15092.57 58.96 0.00 0.00 8476.31 2220.94 16602.45 00:12:21.999 0 00:12:21.999 23:15:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2303024 00:12:21.999 23:15:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 2303024 ']' 00:12:21.999 23:15:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 2303024 00:12:21.999 23:15:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:12:21.999 23:15:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:21.999 23:15:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2303024 00:12:21.999 23:15:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:21.999 23:15:37 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:21.999 23:15:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2303024' 00:12:21.999 killing process with pid 2303024 00:12:21.999 23:15:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 2303024 00:12:21.999 Received shutdown signal, test time was about 10.000000 seconds 00:12:21.999 00:12:21.999 Latency(us) 00:12:21.999 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:21.999 =================================================================================================================== 00:12:21.999 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:21.999 23:15:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 2303024 00:12:22.255 23:15:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:22.512 23:15:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:22.769 23:15:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a1abd86-0d31-450d-bb5f-1a29e9345f90 00:12:22.769 23:15:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:23.026 23:15:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:23.026 23:15:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:12:23.026 23:15:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2299899 00:12:23.026 23:15:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2299899 00:12:23.026 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2299899 Killed "${NVMF_APP[@]}" "$@" 00:12:23.026 23:15:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:12:23.026 23:15:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:12:23.026 23:15:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:23.026 23:15:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:23.026 23:15:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:23.026 23:15:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2304493 00:12:23.026 23:15:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:23.026 23:15:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 2304493 00:12:23.026 23:15:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2304493 ']' 00:12:23.026 23:15:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.026 23:15:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:12:23.026 23:15:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.284 23:15:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:23.284 23:15:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:23.284 [2024-07-15 23:15:38.389182] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:12:23.284 [2024-07-15 23:15:38.389266] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:23.284 EAL: No free 2048 kB hugepages reported on node 1 00:12:23.284 [2024-07-15 23:15:38.454589] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.284 [2024-07-15 23:15:38.563882] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:23.284 [2024-07-15 23:15:38.563944] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:23.284 [2024-07-15 23:15:38.563958] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:23.284 [2024-07-15 23:15:38.563969] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:23.284 [2024-07-15 23:15:38.563984] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:23.284 [2024-07-15 23:15:38.564011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.541 23:15:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:23.541 23:15:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:12:23.541 23:15:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:23.541 23:15:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:23.541 23:15:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:23.541 23:15:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:23.541 23:15:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:23.798 [2024-07-15 23:15:38.982516] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:12:23.798 [2024-07-15 23:15:38.982652] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:12:23.798 [2024-07-15 23:15:38.982699] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:12:23.798 23:15:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:12:23.798 23:15:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev ca5c49a9-c884-471b-b052-38e287e9e412 00:12:23.798 23:15:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=ca5c49a9-c884-471b-b052-38e287e9e412 00:12:23.798 23:15:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:23.798 23:15:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:12:23.798 23:15:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:23.798 23:15:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:23.798 23:15:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:24.073 23:15:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ca5c49a9-c884-471b-b052-38e287e9e412 -t 2000 00:12:24.335 [ 00:12:24.335 { 00:12:24.335 "name": "ca5c49a9-c884-471b-b052-38e287e9e412", 00:12:24.335 "aliases": [ 00:12:24.335 "lvs/lvol" 00:12:24.335 ], 00:12:24.335 "product_name": "Logical Volume", 00:12:24.335 "block_size": 4096, 00:12:24.335 "num_blocks": 38912, 00:12:24.335 "uuid": "ca5c49a9-c884-471b-b052-38e287e9e412", 00:12:24.335 "assigned_rate_limits": { 00:12:24.335 "rw_ios_per_sec": 0, 00:12:24.335 "rw_mbytes_per_sec": 0, 00:12:24.335 "r_mbytes_per_sec": 0, 00:12:24.335 "w_mbytes_per_sec": 0 00:12:24.335 }, 00:12:24.335 "claimed": false, 00:12:24.335 "zoned": false, 00:12:24.335 "supported_io_types": { 00:12:24.335 "read": true, 00:12:24.335 "write": true, 00:12:24.335 "unmap": true, 00:12:24.335 "flush": false, 00:12:24.335 "reset": true, 00:12:24.335 "nvme_admin": false, 00:12:24.335 "nvme_io": false, 00:12:24.335 "nvme_io_md": 
false, 00:12:24.335 "write_zeroes": true, 00:12:24.335 "zcopy": false, 00:12:24.335 "get_zone_info": false, 00:12:24.335 "zone_management": false, 00:12:24.335 "zone_append": false, 00:12:24.335 "compare": false, 00:12:24.335 "compare_and_write": false, 00:12:24.335 "abort": false, 00:12:24.335 "seek_hole": true, 00:12:24.335 "seek_data": true, 00:12:24.335 "copy": false, 00:12:24.335 "nvme_iov_md": false 00:12:24.335 }, 00:12:24.335 "driver_specific": { 00:12:24.335 "lvol": { 00:12:24.335 "lvol_store_uuid": "6a1abd86-0d31-450d-bb5f-1a29e9345f90", 00:12:24.335 "base_bdev": "aio_bdev", 00:12:24.335 "thin_provision": false, 00:12:24.335 "num_allocated_clusters": 38, 00:12:24.335 "snapshot": false, 00:12:24.335 "clone": false, 00:12:24.335 "esnap_clone": false 00:12:24.335 } 00:12:24.335 } 00:12:24.335 } 00:12:24.335 ] 00:12:24.335 23:15:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:12:24.335 23:15:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a1abd86-0d31-450d-bb5f-1a29e9345f90 00:12:24.335 23:15:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:12:24.592 23:15:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:12:24.592 23:15:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a1abd86-0d31-450d-bb5f-1a29e9345f90 00:12:24.592 23:15:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:12:24.850 23:15:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:12:24.850 23:15:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:25.108 [2024-07-15 23:15:40.271807] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:25.108 23:15:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a1abd86-0d31-450d-bb5f-1a29e9345f90 00:12:25.108 23:15:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:12:25.108 23:15:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a1abd86-0d31-450d-bb5f-1a29e9345f90 00:12:25.108 23:15:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:25.108 23:15:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:25.108 23:15:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:25.108 23:15:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:25.108 23:15:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:12:25.108 23:15:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:25.108 23:15:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:25.108 23:15:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:25.108 23:15:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a1abd86-0d31-450d-bb5f-1a29e9345f90 00:12:25.366 request: 00:12:25.366 { 00:12:25.366 "uuid": "6a1abd86-0d31-450d-bb5f-1a29e9345f90", 00:12:25.366 "method": "bdev_lvol_get_lvstores", 00:12:25.366 "req_id": 1 00:12:25.366 } 00:12:25.366 Got JSON-RPC error response 00:12:25.366 response: 00:12:25.366 { 00:12:25.366 "code": -19, 00:12:25.366 "message": "No such device" 00:12:25.366 } 00:12:25.366 23:15:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:12:25.366 23:15:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:25.366 23:15:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:25.366 23:15:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:25.366 23:15:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:25.623 aio_bdev 00:12:25.623 23:15:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ca5c49a9-c884-471b-b052-38e287e9e412 00:12:25.623 23:15:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=ca5c49a9-c884-471b-b052-38e287e9e412 00:12:25.623 23:15:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:25.623 23:15:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:12:25.623 23:15:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:25.623 23:15:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:25.623 23:15:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:25.880 23:15:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ca5c49a9-c884-471b-b052-38e287e9e412 -t 2000 00:12:26.138 [ 00:12:26.138 { 00:12:26.138 "name": "ca5c49a9-c884-471b-b052-38e287e9e412", 00:12:26.138 "aliases": [ 00:12:26.138 "lvs/lvol" 00:12:26.138 ], 00:12:26.138 "product_name": "Logical Volume", 00:12:26.138 "block_size": 4096, 00:12:26.138 "num_blocks": 38912, 00:12:26.138 "uuid": "ca5c49a9-c884-471b-b052-38e287e9e412", 00:12:26.138 "assigned_rate_limits": { 00:12:26.138 "rw_ios_per_sec": 0, 00:12:26.138 "rw_mbytes_per_sec": 0, 00:12:26.138 "r_mbytes_per_sec": 0, 00:12:26.138 "w_mbytes_per_sec": 0 00:12:26.138 }, 00:12:26.138 "claimed": false, 00:12:26.138 "zoned": false, 00:12:26.138 "supported_io_types": { 
00:12:26.138 "read": true, 00:12:26.138 "write": true, 00:12:26.138 "unmap": true, 00:12:26.138 "flush": false, 00:12:26.138 "reset": true, 00:12:26.138 "nvme_admin": false, 00:12:26.138 "nvme_io": false, 00:12:26.138 "nvme_io_md": false, 00:12:26.138 "write_zeroes": true, 00:12:26.138 "zcopy": false, 00:12:26.138 "get_zone_info": false, 00:12:26.138 "zone_management": false, 00:12:26.138 "zone_append": false, 00:12:26.138 "compare": false, 00:12:26.138 "compare_and_write": false, 00:12:26.138 "abort": false, 00:12:26.138 "seek_hole": true, 00:12:26.138 "seek_data": true, 00:12:26.138 "copy": false, 00:12:26.138 "nvme_iov_md": false 00:12:26.138 }, 00:12:26.138 "driver_specific": { 00:12:26.138 "lvol": { 00:12:26.138 "lvol_store_uuid": "6a1abd86-0d31-450d-bb5f-1a29e9345f90", 00:12:26.138 "base_bdev": "aio_bdev", 00:12:26.138 "thin_provision": false, 00:12:26.138 "num_allocated_clusters": 38, 00:12:26.138 "snapshot": false, 00:12:26.138 "clone": false, 00:12:26.138 "esnap_clone": false 00:12:26.138 } 00:12:26.138 } 00:12:26.138 } 00:12:26.138 ] 00:12:26.138 23:15:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:12:26.138 23:15:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a1abd86-0d31-450d-bb5f-1a29e9345f90 00:12:26.138 23:15:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:26.396 23:15:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:26.396 23:15:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a1abd86-0d31-450d-bb5f-1a29e9345f90 00:12:26.396 23:15:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:26.653 23:15:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:26.654 23:15:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ca5c49a9-c884-471b-b052-38e287e9e412 00:12:26.911 23:15:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6a1abd86-0d31-450d-bb5f-1a29e9345f90 00:12:27.171 23:15:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:27.429 23:15:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:27.429 00:12:27.429 real 0m19.104s 00:12:27.429 user 0m48.582s 00:12:27.429 sys 0m4.976s 00:12:27.429 23:15:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:27.429 23:15:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:27.429 ************************************ 00:12:27.429 END TEST lvs_grow_dirty 00:12:27.429 ************************************ 00:12:27.429 23:15:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:12:27.429 23:15:42 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:12:27.429 23:15:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:12:27.429 23:15:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:12:27.429 23:15:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:12:27.429 23:15:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:27.429 23:15:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:12:27.429 23:15:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:12:27.429 23:15:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:12:27.430 23:15:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:27.430 nvmf_trace.0 00:12:27.430 23:15:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:12:27.430 23:15:42 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:27.430 23:15:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:27.430 23:15:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:12:27.430 23:15:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:27.430 23:15:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:12:27.430 23:15:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:27.430 23:15:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:27.430 rmmod nvme_tcp 00:12:27.430 rmmod nvme_fabrics 00:12:27.430 rmmod nvme_keyring 00:12:27.430 23:15:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:27.430 23:15:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:12:27.430 23:15:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:12:27.430 23:15:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2304493 ']' 00:12:27.430 23:15:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2304493 00:12:27.430 23:15:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 2304493 ']' 00:12:27.430 23:15:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 2304493 00:12:27.430 23:15:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:12:27.430 23:15:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:27.430 23:15:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2304493 00:12:27.430 23:15:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:27.430 23:15:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:27.430 23:15:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2304493' 00:12:27.430 killing process with pid 2304493 00:12:27.430 23:15:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 2304493 00:12:27.430 23:15:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 2304493 00:12:27.688 23:15:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:27.688 23:15:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:27.688 23:15:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:27.688 
23:15:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:27.688 23:15:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:27.688 23:15:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.688 23:15:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:27.688 23:15:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.212 23:15:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:30.212 00:12:30.212 real 0m42.460s 00:12:30.212 user 1m11.212s 00:12:30.212 sys 0m8.822s 00:12:30.212 23:15:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:30.212 23:15:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:30.212 ************************************ 00:12:30.212 END TEST nvmf_lvs_grow 00:12:30.212 ************************************ 00:12:30.212 23:15:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:30.212 23:15:45 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:30.212 23:15:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:30.212 23:15:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:30.212 23:15:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:30.212 ************************************ 00:12:30.212 START TEST nvmf_bdev_io_wait 00:12:30.212 ************************************ 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:30.212 * Looking for test storage... 
00:12:30.212 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:12:30.212 23:15:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:32.110 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:32.110 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:32.110 Found net devices under 0000:84:00.0: cvl_0_0 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:32.110 Found net devices under 0000:84:00.1: cvl_0_1 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:32.110 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:32.110 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:12:32.110 00:12:32.110 --- 10.0.0.2 ping statistics --- 00:12:32.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.110 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:12:32.110 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:32.110 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:32.110 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:12:32.110 00:12:32.110 --- 10.0.0.1 ping statistics --- 00:12:32.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.111 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:12:32.111 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:32.111 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:12:32.111 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:32.111 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:32.111 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:32.111 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:32.111 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:32.111 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:32.111 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:32.111 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:32.111 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:32.111 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:32.111 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:32.111 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2307026 00:12:32.111 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2307026 00:12:32.111 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:32.111 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 2307026 ']' 00:12:32.111 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.111 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:32.111 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.111 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:32.111 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:32.111 [2024-07-15 23:15:47.399136] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
00:12:32.111 [2024-07-15 23:15:47.399231] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.369 EAL: No free 2048 kB hugepages reported on node 1 00:12:32.369 [2024-07-15 23:15:47.472115] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:32.369 [2024-07-15 23:15:47.591390] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:32.369 [2024-07-15 23:15:47.591451] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:32.369 [2024-07-15 23:15:47.591468] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:32.369 [2024-07-15 23:15:47.591481] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:32.369 [2024-07-15 23:15:47.591492] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:32.369 [2024-07-15 23:15:47.591575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:32.369 [2024-07-15 23:15:47.591645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:32.369 [2024-07-15 23:15:47.591754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:32.369 [2024-07-15 23:15:47.591757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.369 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:32.369 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:12:32.369 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:32.369 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:32.369 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:32.369 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:32.369 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:32.369 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.369 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:32.369 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.369 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:32.369 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.369 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:32.627 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.627 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:32.627 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.627 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:32.627 [2024-07-15 23:15:47.738214] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:32.627 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
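The bdev_io_wait target above is deliberately starved of bdev_io descriptors: the app is launched with --wait-for-rpc so that bdev_set_options can shrink the global bdev_io pool before subsystem initialization, which is presumably what later pushes the four bdevperf workloads onto the io_wait path. A minimal sketch of that ordering, using the values from this run and showing the trace's rpc_cmd wrapper as plain rpc.py calls:

  nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &   # start target but do not initialize subsystems yet
  rpc.py bdev_set_options -p 5 -c 1                 # tiny bdev_io pool: 5 entries total, per-thread cache of 1
  rpc.py framework_start_init                       # only now let the framework finish coming up
  rpc.py nvmf_create_transport -t tcp -o -u 8192    # create the TCP transport with the flags seen in the trace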
00:12:32.627 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:32.627 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.627 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:32.627 Malloc0 00:12:32.627 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.627 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:32.627 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.627 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:32.627 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.627 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:32.627 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.627 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:32.627 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.627 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:32.627 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.627 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:32.627 [2024-07-15 23:15:47.797076] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:32.627 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.627 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2307059 00:12:32.627 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:12:32.627 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:12:32.627 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2307061 00:12:32.627 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:32.627 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:32.627 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:32.627 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:32.627 { 00:12:32.627 "params": { 00:12:32.627 "name": "Nvme$subsystem", 00:12:32.627 "trtype": "$TEST_TRANSPORT", 00:12:32.627 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:32.627 "adrfam": "ipv4", 00:12:32.627 "trsvcid": "$NVMF_PORT", 00:12:32.627 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:32.627 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:32.627 "hdgst": ${hdgst:-false}, 00:12:32.627 "ddgst": ${ddgst:-false} 00:12:32.627 }, 00:12:32.627 "method": "bdev_nvme_attach_controller" 00:12:32.627 } 00:12:32.627 EOF 00:12:32.627 )") 00:12:32.627 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:32.627 23:15:47 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:12:32.627 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2307063 00:12:32.627 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:32.627 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:32.627 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:32.627 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:32.627 { 00:12:32.627 "params": { 00:12:32.627 "name": "Nvme$subsystem", 00:12:32.627 "trtype": "$TEST_TRANSPORT", 00:12:32.627 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:32.627 "adrfam": "ipv4", 00:12:32.627 "trsvcid": "$NVMF_PORT", 00:12:32.627 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:32.627 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:32.627 "hdgst": ${hdgst:-false}, 00:12:32.627 "ddgst": ${ddgst:-false} 00:12:32.627 }, 00:12:32.627 "method": "bdev_nvme_attach_controller" 00:12:32.627 } 00:12:32.627 EOF 00:12:32.627 )") 00:12:32.627 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:32.627 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:12:32.627 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2307066 00:12:32.627 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:32.627 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:12:32.627 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:32.627 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:32.627 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:32.627 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:32.627 { 00:12:32.627 "params": { 00:12:32.627 "name": "Nvme$subsystem", 00:12:32.627 "trtype": "$TEST_TRANSPORT", 00:12:32.627 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:32.627 "adrfam": "ipv4", 00:12:32.628 "trsvcid": "$NVMF_PORT", 00:12:32.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:32.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:32.628 "hdgst": ${hdgst:-false}, 00:12:32.628 "ddgst": ${ddgst:-false} 00:12:32.628 }, 00:12:32.628 "method": "bdev_nvme_attach_controller" 00:12:32.628 } 00:12:32.628 EOF 00:12:32.628 )") 00:12:32.628 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:32.628 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:32.628 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:32.628 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:32.628 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:32.628 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:32.628 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 
-- # config+=("$(cat <<-EOF 00:12:32.628 { 00:12:32.628 "params": { 00:12:32.628 "name": "Nvme$subsystem", 00:12:32.628 "trtype": "$TEST_TRANSPORT", 00:12:32.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:32.628 "adrfam": "ipv4", 00:12:32.628 "trsvcid": "$NVMF_PORT", 00:12:32.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:32.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:32.628 "hdgst": ${hdgst:-false}, 00:12:32.628 "ddgst": ${ddgst:-false} 00:12:32.628 }, 00:12:32.628 "method": "bdev_nvme_attach_controller" 00:12:32.628 } 00:12:32.628 EOF 00:12:32.628 )") 00:12:32.628 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:32.628 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2307059 00:12:32.628 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:32.628 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:32.628 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:32.628 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:32.628 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:32.628 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:32.628 "params": { 00:12:32.628 "name": "Nvme1", 00:12:32.628 "trtype": "tcp", 00:12:32.628 "traddr": "10.0.0.2", 00:12:32.628 "adrfam": "ipv4", 00:12:32.628 "trsvcid": "4420", 00:12:32.628 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:32.628 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:32.628 "hdgst": false, 00:12:32.628 "ddgst": false 00:12:32.628 }, 00:12:32.628 "method": "bdev_nvme_attach_controller" 00:12:32.628 }' 00:12:32.628 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:32.628 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:12:32.628 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:32.628 "params": { 00:12:32.628 "name": "Nvme1", 00:12:32.628 "trtype": "tcp", 00:12:32.628 "traddr": "10.0.0.2", 00:12:32.628 "adrfam": "ipv4", 00:12:32.628 "trsvcid": "4420", 00:12:32.628 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:32.628 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:32.628 "hdgst": false, 00:12:32.628 "ddgst": false 00:12:32.628 }, 00:12:32.628 "method": "bdev_nvme_attach_controller" 00:12:32.628 }' 00:12:32.628 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:32.628 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:32.628 "params": { 00:12:32.628 "name": "Nvme1", 00:12:32.628 "trtype": "tcp", 00:12:32.628 "traddr": "10.0.0.2", 00:12:32.628 "adrfam": "ipv4", 00:12:32.628 "trsvcid": "4420", 00:12:32.628 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:32.628 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:32.628 "hdgst": false, 00:12:32.628 "ddgst": false 00:12:32.628 }, 00:12:32.628 "method": "bdev_nvme_attach_controller" 00:12:32.628 }' 00:12:32.628 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:32.628 23:15:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:32.628 "params": { 00:12:32.628 "name": "Nvme1", 00:12:32.628 "trtype": "tcp", 00:12:32.628 "traddr": "10.0.0.2", 00:12:32.628 "adrfam": "ipv4", 00:12:32.628 "trsvcid": "4420", 00:12:32.628 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:32.628 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:32.628 "hdgst": false, 00:12:32.628 "ddgst": false 00:12:32.628 }, 00:12:32.628 "method": "bdev_nvme_attach_controller" 00:12:32.628 }' 00:12:32.628 [2024-07-15 23:15:47.844282] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:12:32.628 [2024-07-15 23:15:47.844282] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:12:32.628 [2024-07-15 23:15:47.844382] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-15 23:15:47.844381] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:12:32.628 --proc-type=auto ] 00:12:32.628 [2024-07-15 23:15:47.845153] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:12:32.628 [2024-07-15 23:15:47.845154] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
00:12:32.628 [2024-07-15 23:15:47.845243] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-15 23:15:47.845243] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:12:32.628 --proc-type=auto ] 00:12:32.628 EAL: No free 2048 kB hugepages reported on node 1 00:12:32.886 EAL: No free 2048 kB hugepages reported on node 1 00:12:32.886 [2024-07-15 23:15:48.024292] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.886 EAL: No free 2048 kB hugepages reported on node 1 00:12:32.886 [2024-07-15 23:15:48.125538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:12:32.886 [2024-07-15 23:15:48.131710] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.886 EAL: No free 2048 kB hugepages reported on node 1 00:12:33.143 [2024-07-15 23:15:48.206974] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.143 [2024-07-15 23:15:48.235512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:33.143 [2024-07-15 23:15:48.285138] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.143 [2024-07-15 23:15:48.304312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:12:33.143 [2024-07-15 23:15:48.379652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:12:33.143 Running I/O for 1 seconds... 00:12:33.400 Running I/O for 1 seconds... 00:12:33.400 Running I/O for 1 seconds... 00:12:33.400 Running I/O for 1 seconds... 
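At this point four bdevperf instances are running in parallel, one per workload (write, read, flush, unmap), each pinned to its own core mask (-m 0x10..0x80) and shm id (-i 1..4), and each fed the same attach-controller JSON through --json /dev/fd/63. A condensed sketch of that fan-out, assuming the harness helper gen_nvmf_target_json whose resolved output is the JSON printed a few lines above (the loop is an illustration, not the literal script):

  pids=()
  for w in write read flush unmap; do
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w $w -t 1 -s 256 &    # process substitution is where /dev/fd/63 comes from
    pids+=($!)
  done
  wait "${pids[@]}"                           # the real script tracks these as WRITE_PID/READ_PID/FLUSH_PID/UNMAP_PID

The per-instance core masks and shm ids are omitted here for brevity.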
00:12:34.333 00:12:34.333 Latency(us) 00:12:34.333 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:34.333 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:12:34.333 Nvme1n1 : 1.02 6991.95 27.31 0.00 0.00 18097.39 9417.77 28544.57 00:12:34.333 =================================================================================================================== 00:12:34.333 Total : 6991.95 27.31 0.00 0.00 18097.39 9417.77 28544.57 00:12:34.333 00:12:34.333 Latency(us) 00:12:34.333 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:34.333 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:12:34.333 Nvme1n1 : 1.01 8886.65 34.71 0.00 0.00 14333.10 8883.77 26796.94 00:12:34.333 =================================================================================================================== 00:12:34.333 Total : 8886.65 34.71 0.00 0.00 14333.10 8883.77 26796.94 00:12:34.333 00:12:34.333 Latency(us) 00:12:34.333 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:34.333 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:12:34.333 Nvme1n1 : 1.00 7248.46 28.31 0.00 0.00 17609.03 4854.52 40195.41 00:12:34.333 =================================================================================================================== 00:12:34.333 Total : 7248.46 28.31 0.00 0.00 17609.03 4854.52 40195.41 00:12:34.591 00:12:34.591 Latency(us) 00:12:34.591 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:34.591 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:12:34.591 Nvme1n1 : 1.00 198213.99 774.27 0.00 0.00 643.36 285.20 782.79 00:12:34.591 =================================================================================================================== 00:12:34.591 Total : 198213.99 774.27 0.00 0.00 643.36 285.20 782.79 00:12:34.591 23:15:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2307061 00:12:34.591 23:15:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2307063 00:12:34.849 23:15:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2307066 00:12:34.849 23:15:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:34.849 23:15:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.849 23:15:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:34.849 23:15:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.849 23:15:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:12:34.849 23:15:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:12:34.849 23:15:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:34.849 23:15:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:12:34.849 23:15:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:34.849 23:15:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:12:34.849 23:15:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:34.849 23:15:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:34.849 rmmod nvme_tcp 00:12:34.849 rmmod nvme_fabrics 00:12:34.849 rmmod nvme_keyring 00:12:34.849 23:15:50 nvmf_tcp.nvmf_bdev_io_wait 
-- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:34.849 23:15:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:12:34.849 23:15:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:12:34.849 23:15:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2307026 ']' 00:12:34.849 23:15:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2307026 00:12:34.849 23:15:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 2307026 ']' 00:12:34.849 23:15:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 2307026 00:12:34.849 23:15:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:12:34.849 23:15:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:34.849 23:15:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2307026 00:12:34.849 23:15:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:34.849 23:15:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:34.849 23:15:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2307026' 00:12:34.849 killing process with pid 2307026 00:12:34.849 23:15:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 2307026 00:12:34.849 23:15:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 2307026 00:12:35.108 23:15:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:35.108 23:15:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:35.108 23:15:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:35.108 23:15:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:35.108 23:15:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:35.109 23:15:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.109 23:15:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:35.109 23:15:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.632 23:15:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:37.632 00:12:37.632 real 0m7.328s 00:12:37.632 user 0m17.556s 00:12:37.632 sys 0m3.365s 00:12:37.632 23:15:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:37.632 23:15:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:37.632 ************************************ 00:12:37.632 END TEST nvmf_bdev_io_wait 00:12:37.632 ************************************ 00:12:37.632 23:15:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:37.632 23:15:52 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:37.632 23:15:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:37.632 23:15:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:37.632 23:15:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:37.632 ************************************ 00:12:37.632 START TEST nvmf_queue_depth 00:12:37.632 ************************************ 
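As a quick sanity check (not part of the test output), the four 1-second jobs above are internally consistent with Little's law for a fixed queue depth of 128:

  average latency ≈ queue_depth / IOPS
  write: 128 / 8886.65 IO/s ≈ 14.4 ms   (reported 14333 us)
  read:  128 / 6991.95 IO/s ≈ 18.3 ms   (reported 18097 us)

The flush job's ~198k IOPS against a ~643 us average fits the same relation.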
00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:37.632 * Looking for test storage... 00:12:37.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:12:37.632 23:15:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:39.528 
23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:39.528 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:39.528 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:39.528 Found net devices under 0000:84:00.0: cvl_0_0 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:39.528 Found net devices under 0000:84:00.1: cvl_0_1 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:39.528 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:39.528 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:12:39.528 00:12:39.528 --- 10.0.0.2 ping statistics --- 00:12:39.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.528 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:12:39.528 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:39.529 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:39.529 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:12:39.529 00:12:39.529 --- 10.0.0.1 ping statistics --- 00:12:39.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.529 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:12:39.529 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:39.529 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:12:39.529 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:39.529 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:39.529 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:39.529 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:39.529 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:39.529 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:39.529 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:39.529 23:15:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:12:39.529 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:39.529 23:15:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:39.529 23:15:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:39.529 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2309296 00:12:39.529 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:39.529 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2309296 00:12:39.529 23:15:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2309296 ']' 00:12:39.529 23:15:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.529 23:15:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:39.529 23:15:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:39.529 23:15:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:39.529 23:15:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:39.529 [2024-07-15 23:15:54.623192] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
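Condensed from the nvmf_tcp_init trace above, this is the namespace topology the run uses: the first E810 port (cvl_0_0, target side, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace while its peer port (cvl_0_1, initiator side, 10.0.0.1) stays in the root namespace, with an iptables rule opening TCP/4420. This only restates the commands already logged, it is not additional configuration:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt above is then launched with 'ip netns exec cvl_0_0_ns_spdk', so it listens on the namespaced 10.0.0.2 address.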
00:12:39.529 [2024-07-15 23:15:54.623281] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:39.529 EAL: No free 2048 kB hugepages reported on node 1 00:12:39.529 [2024-07-15 23:15:54.688980] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.529 [2024-07-15 23:15:54.800065] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:39.529 [2024-07-15 23:15:54.800139] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:39.529 [2024-07-15 23:15:54.800153] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:39.529 [2024-07-15 23:15:54.800164] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:39.529 [2024-07-15 23:15:54.800173] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:39.529 [2024-07-15 23:15:54.800209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:39.786 23:15:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:39.786 23:15:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:12:39.786 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:39.786 23:15:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:39.786 23:15:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:39.786 23:15:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:39.786 23:15:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:39.786 23:15:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.786 23:15:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:39.786 [2024-07-15 23:15:54.947619] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:39.786 23:15:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.786 23:15:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:39.787 23:15:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.787 23:15:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:39.787 Malloc0 00:12:39.787 23:15:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.787 23:15:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:39.787 23:15:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.787 23:15:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:39.787 23:15:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.787 23:15:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:39.787 23:15:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.787 
23:15:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:39.787 23:15:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.787 23:15:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:39.787 23:15:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.787 23:15:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:39.787 [2024-07-15 23:15:55.006287] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:39.787 23:15:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.787 23:15:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2309325 00:12:39.787 23:15:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:39.787 23:15:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:12:39.787 23:15:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2309325 /var/tmp/bdevperf.sock 00:12:39.787 23:15:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2309325 ']' 00:12:39.787 23:15:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:39.787 23:15:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:39.787 23:15:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:39.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:39.787 23:15:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:39.787 23:15:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:39.787 [2024-07-15 23:15:55.052774] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
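The queue-depth case drives bdevperf through its own RPC socket rather than a JSON config: bdevperf is started with -z (idle until told to run) on /var/tmp/bdevperf.sock, an NVMe-oF controller is attached to it over that socket, and perform_tests kicks off the 10-second verify run. The attach and perform_tests calls appear in the trace just below; rendering them as plain rpc.py calls is an assumption about what the rpc_cmd wrapper expands to.

  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests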
00:12:39.787 [2024-07-15 23:15:55.052850] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2309325 ] 00:12:39.787 EAL: No free 2048 kB hugepages reported on node 1 00:12:40.045 [2024-07-15 23:15:55.112888] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.045 [2024-07-15 23:15:55.225087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.045 23:15:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:40.045 23:15:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:12:40.045 23:15:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:40.045 23:15:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.045 23:15:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:40.303 NVMe0n1 00:12:40.303 23:15:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.303 23:15:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:40.303 Running I/O for 10 seconds... 00:12:52.574 00:12:52.574 Latency(us) 00:12:52.574 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:52.574 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:12:52.574 Verification LBA range: start 0x0 length 0x4000 00:12:52.574 NVMe0n1 : 10.07 8630.02 33.71 0.00 0.00 118176.17 20874.43 78449.02 00:12:52.574 =================================================================================================================== 00:12:52.574 Total : 8630.02 33.71 0.00 0.00 118176.17 20874.43 78449.02 00:12:52.574 0 00:12:52.574 23:16:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2309325 00:12:52.574 23:16:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2309325 ']' 00:12:52.574 23:16:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2309325 00:12:52.574 23:16:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:12:52.574 23:16:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:52.574 23:16:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2309325 00:12:52.574 23:16:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:52.574 23:16:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:52.574 23:16:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2309325' 00:12:52.574 killing process with pid 2309325 00:12:52.574 23:16:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 2309325 00:12:52.574 Received shutdown signal, test time was about 10.000000 seconds 00:12:52.574 00:12:52.574 Latency(us) 00:12:52.574 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:52.574 
=================================================================================================================== 00:12:52.574 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:52.574 23:16:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2309325 00:12:52.574 23:16:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:52.574 23:16:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:12:52.574 23:16:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:52.574 23:16:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:12:52.574 23:16:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:52.574 23:16:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:12:52.574 23:16:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:52.574 23:16:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:52.574 rmmod nvme_tcp 00:12:52.574 rmmod nvme_fabrics 00:12:52.574 rmmod nvme_keyring 00:12:52.574 23:16:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:52.574 23:16:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:12:52.574 23:16:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:12:52.574 23:16:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2309296 ']' 00:12:52.574 23:16:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2309296 00:12:52.574 23:16:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2309296 ']' 00:12:52.574 23:16:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2309296 00:12:52.574 23:16:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:12:52.574 23:16:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:52.574 23:16:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2309296 00:12:52.574 23:16:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:52.574 23:16:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:52.574 23:16:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2309296' 00:12:52.574 killing process with pid 2309296 00:12:52.574 23:16:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 2309296 00:12:52.574 23:16:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2309296 00:12:52.574 23:16:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:52.574 23:16:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:52.574 23:16:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:52.574 23:16:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:52.574 23:16:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:52.574 23:16:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.574 23:16:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:52.574 23:16:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:53.141 23:16:08 nvmf_tcp.nvmf_queue_depth -- 
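The same Little's-law check as before also holds for the deep-queue verify run above: 1024 / 8630.02 IO/s ≈ 118.7 ms, in line with the reported 118176 us average, so the high per-I/O latency reflects the 1024-deep queue rather than a stalled target.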
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:53.141 00:12:53.141 real 0m15.981s 00:12:53.141 user 0m22.374s 00:12:53.141 sys 0m3.174s 00:12:53.141 23:16:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:53.141 23:16:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:53.141 ************************************ 00:12:53.141 END TEST nvmf_queue_depth 00:12:53.141 ************************************ 00:12:53.399 23:16:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:53.399 23:16:08 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:53.399 23:16:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:53.399 23:16:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:53.399 23:16:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:53.399 ************************************ 00:12:53.399 START TEST nvmf_target_multipath 00:12:53.399 ************************************ 00:12:53.399 23:16:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:53.399 * Looking for test storage... 00:12:53.399 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:53.399 23:16:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:53.399 23:16:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:12:53.399 23:16:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:53.399 23:16:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:53.399 23:16:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:53.399 23:16:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:53.399 23:16:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:53.399 23:16:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:53.399 23:16:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:53.399 23:16:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:53.399 23:16:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:53.399 23:16:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:53.399 23:16:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:53.399 23:16:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:53.399 23:16:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:53.399 23:16:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:53.399 23:16:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:53.399 23:16:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:53.399 23:16:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:53.399 23:16:08 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:53.399 23:16:08 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:53.399 23:16:08 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:53.399 23:16:08 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.399 23:16:08 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.399 23:16:08 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.399 23:16:08 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:12:53.399 23:16:08 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.399 23:16:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:12:53.399 23:16:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:53.399 23:16:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:53.399 23:16:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:53.399 23:16:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:53.399 23:16:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:12:53.399 23:16:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:53.399 23:16:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:53.399 23:16:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:53.399 23:16:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:53.399 23:16:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:53.399 23:16:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:12:53.399 23:16:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:53.399 23:16:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:12:53.400 23:16:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:53.400 23:16:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:53.400 23:16:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:53.400 23:16:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:53.400 23:16:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:53.400 23:16:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.400 23:16:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:53.400 23:16:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:53.400 23:16:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:53.400 23:16:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:53.400 23:16:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:12:53.400 23:16:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:55.295 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:55.295 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:55.295 Found net devices under 0000:84:00.0: cvl_0_0 00:12:55.295 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:55.296 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:55.296 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:55.296 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:55.296 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:55.296 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:55.296 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:55.296 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:55.296 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:55.296 Found net devices under 0000:84:00.1: cvl_0_1 00:12:55.296 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:55.296 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:55.296 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:12:55.296 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:55.296 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:55.296 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:55.296 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:55.296 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:55.296 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:55.296 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:55.296 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:55.296 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:55.296 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:55.296 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:55.296 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:55.296 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:55.296 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:55.296 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:55.296 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:55.296 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:55.296 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:55.296 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:55.296 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:55.570 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:55.570 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:55.570 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:55.570 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:55.570 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:12:55.570 00:12:55.570 --- 10.0.0.2 ping statistics --- 00:12:55.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:55.571 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:12:55.571 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:55.571 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:55.571 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:12:55.571 00:12:55.571 --- 10.0.0.1 ping statistics --- 00:12:55.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:55.571 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:12:55.571 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:55.571 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:12:55.571 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:55.571 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:55.571 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:55.571 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:55.571 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:55.571 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:55.571 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:55.571 23:16:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:12:55.571 23:16:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:12:55.571 only one NIC for nvmf test 00:12:55.571 23:16:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:12:55.571 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:55.571 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:12:55.571 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:55.571 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:12:55.571 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:55.571 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:55.571 rmmod nvme_tcp 00:12:55.571 rmmod nvme_fabrics 00:12:55.571 rmmod nvme_keyring 00:12:55.571 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:55.571 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:12:55.571 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:12:55.571 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:55.571 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:55.571 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:55.571 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:55.571 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:55.571 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:55.571 23:16:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.571 23:16:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:55.571 23:16:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.467 23:16:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:12:57.467 23:16:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:12:57.467 23:16:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:12:57.467 23:16:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:57.467 23:16:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:12:57.467 23:16:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:57.467 23:16:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:12:57.467 23:16:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:57.467 23:16:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:57.467 23:16:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:57.467 23:16:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:12:57.467 23:16:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:12:57.467 23:16:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:57.467 23:16:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:57.467 23:16:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:57.467 23:16:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:57.467 23:16:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:57.467 23:16:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:57.467 23:16:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.467 23:16:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:57.467 23:16:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.467 23:16:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:57.467 00:12:57.467 real 0m4.279s 00:12:57.467 user 0m0.827s 00:12:57.467 sys 0m1.445s 00:12:57.467 23:16:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:57.467 23:16:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:57.467 ************************************ 00:12:57.467 END TEST nvmf_target_multipath 00:12:57.467 ************************************ 00:12:57.725 23:16:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:57.725 23:16:12 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:57.725 23:16:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:57.725 23:16:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:57.725 23:16:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:57.725 ************************************ 00:12:57.725 START TEST nvmf_zcopy 00:12:57.725 ************************************ 00:12:57.725 23:16:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:57.725 * Looking for test storage... 
00:12:57.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:57.725 23:16:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:57.725 23:16:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:12:57.725 23:16:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:57.725 23:16:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:57.725 23:16:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:57.725 23:16:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:57.725 23:16:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:57.725 23:16:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:57.725 23:16:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:57.725 23:16:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:57.725 23:16:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:57.725 23:16:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:57.725 23:16:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:57.725 23:16:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:57.725 23:16:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:57.725 23:16:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:57.725 23:16:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:57.725 23:16:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:57.725 23:16:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:57.725 23:16:12 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:57.725 23:16:12 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:57.725 23:16:12 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:57.726 23:16:12 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.726 23:16:12 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:12:57.726 23:16:12 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.726 23:16:12 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:12:57.726 23:16:12 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.726 23:16:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:12:57.726 23:16:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:57.726 23:16:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:57.726 23:16:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:57.726 23:16:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:57.726 23:16:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:57.726 23:16:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:57.726 23:16:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:57.726 23:16:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:57.726 23:16:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:12:57.726 23:16:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:57.726 23:16:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:57.726 23:16:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:57.726 23:16:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:57.726 23:16:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:57.726 23:16:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.726 23:16:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:57.726 23:16:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.726 23:16:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:57.726 23:16:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:57.726 23:16:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:12:57.726 23:16:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:59.627 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:59.627 
23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:59.627 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:59.627 Found net devices under 0000:84:00.0: cvl_0_0 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:59.627 Found net devices under 0000:84:00.1: cvl_0_1 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:59.627 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:59.628 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:59.628 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:59.628 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:59.628 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:59.628 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:59.628 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:59.628 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:59.628 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:59.628 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:59.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:59.888 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:12:59.888 00:12:59.888 --- 10.0.0.2 ping statistics --- 00:12:59.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.888 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:12:59.888 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:59.888 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:59.888 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:12:59.888 00:12:59.888 --- 10.0.0.1 ping statistics --- 00:12:59.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.888 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:12:59.888 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:59.888 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:12:59.888 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:59.888 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:59.888 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:59.888 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:59.888 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:59.888 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:59.888 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:59.888 23:16:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:12:59.888 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:59.888 23:16:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:59.888 23:16:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:59.888 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2314529 00:12:59.888 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:59.888 23:16:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2314529 00:12:59.888 23:16:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 2314529 ']' 00:12:59.888 23:16:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.888 23:16:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:59.888 23:16:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.888 23:16:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:59.888 23:16:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:59.888 [2024-07-15 23:16:15.026309] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:12:59.888 [2024-07-15 23:16:15.026409] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:59.888 EAL: No free 2048 kB hugepages reported on node 1 00:12:59.888 [2024-07-15 23:16:15.095631] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.146 [2024-07-15 23:16:15.211269] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:00.146 [2024-07-15 23:16:15.211327] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:00.146 [2024-07-15 23:16:15.211354] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:00.146 [2024-07-15 23:16:15.211367] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:00.146 [2024-07-15 23:16:15.211379] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:00.146 [2024-07-15 23:16:15.211410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:00.710 23:16:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:00.710 23:16:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:13:00.710 23:16:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:00.710 23:16:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:00.710 23:16:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:00.710 23:16:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:00.710 23:16:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:13:00.710 23:16:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:13:00.710 23:16:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.710 23:16:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:00.710 [2024-07-15 23:16:15.992356] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:00.710 23:16:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.710 23:16:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:00.710 23:16:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.710 23:16:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:00.710 23:16:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.710 23:16:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:00.710 23:16:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.710 23:16:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:00.710 [2024-07-15 23:16:16.008502] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:00.710 23:16:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.710 23:16:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:00.710 23:16:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.710 23:16:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:00.710 23:16:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.710 23:16:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:13:00.710 23:16:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.710 23:16:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:00.967 malloc0 00:13:00.967 23:16:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.967 
23:16:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:00.967 23:16:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.967 23:16:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:00.967 23:16:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.967 23:16:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:13:00.967 23:16:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:13:00.967 23:16:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:13:00.967 23:16:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:13:00.967 23:16:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:00.967 23:16:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:00.967 { 00:13:00.967 "params": { 00:13:00.967 "name": "Nvme$subsystem", 00:13:00.967 "trtype": "$TEST_TRANSPORT", 00:13:00.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:00.967 "adrfam": "ipv4", 00:13:00.967 "trsvcid": "$NVMF_PORT", 00:13:00.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:00.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:00.967 "hdgst": ${hdgst:-false}, 00:13:00.967 "ddgst": ${ddgst:-false} 00:13:00.967 }, 00:13:00.967 "method": "bdev_nvme_attach_controller" 00:13:00.967 } 00:13:00.967 EOF 00:13:00.967 )") 00:13:00.967 23:16:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:13:00.967 23:16:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:13:00.967 23:16:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:13:00.967 23:16:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:00.967 "params": { 00:13:00.967 "name": "Nvme1", 00:13:00.967 "trtype": "tcp", 00:13:00.967 "traddr": "10.0.0.2", 00:13:00.967 "adrfam": "ipv4", 00:13:00.967 "trsvcid": "4420", 00:13:00.967 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:00.967 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:00.967 "hdgst": false, 00:13:00.967 "ddgst": false 00:13:00.967 }, 00:13:00.967 "method": "bdev_nvme_attach_controller" 00:13:00.967 }' 00:13:00.967 [2024-07-15 23:16:16.082982] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:13:00.967 [2024-07-15 23:16:16.083072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2314683 ] 00:13:00.967 EAL: No free 2048 kB hugepages reported on node 1 00:13:00.967 [2024-07-15 23:16:16.147152] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.967 [2024-07-15 23:16:16.268214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.225 Running I/O for 10 seconds... 
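For reference, the zcopy target bring-up captured in the xtrace above reduces to a handful of RPC calls followed by one bdevperf run. The sketch below restates those commands in plain shell; it is assembled from what the log shows rather than lifted from zcopy.sh, and the /tmp/bdevperf_nvme.json path plus the plain subsystems/bdev JSON wrapper are assumptions standing in for the gen_nvmf_target_json and /dev/fd/62 plumbing the harness actually uses.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf

# Target side (the nvmf_tgt that nvmfappstart launched inside the cvl_0_0_ns_spdk namespace):
$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy                     # TCP transport with zero-copy enabled
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 4096 -b malloc0                            # 32 MiB malloc bdev, 4 KiB blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

# Initiator side: bdevperf attaches over TCP using a JSON config equivalent to the one
# gen_nvmf_target_json printed above (simplified here to the single attach entry).
cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
$bdevperf --json /tmp/bdevperf_nvme.json -t 10 -q 128 -w verify -o 8192   # the 10 s verify pass whose results follow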
00:13:11.183 00:13:11.183 Latency(us) 00:13:11.183 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:11.183 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:13:11.183 Verification LBA range: start 0x0 length 0x1000 00:13:11.183 Nvme1n1 : 10.01 5732.39 44.78 0.00 0.00 22268.40 661.43 32622.36 00:13:11.183 =================================================================================================================== 00:13:11.183 Total : 5732.39 44.78 0.00 0.00 22268.40 661.43 32622.36 00:13:11.749 23:16:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2315873 00:13:11.749 23:16:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:13:11.749 23:16:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:11.749 23:16:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:13:11.749 23:16:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:13:11.749 23:16:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:13:11.749 23:16:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:13:11.749 23:16:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:11.749 23:16:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:11.749 { 00:13:11.749 "params": { 00:13:11.749 "name": "Nvme$subsystem", 00:13:11.749 "trtype": "$TEST_TRANSPORT", 00:13:11.749 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:11.749 "adrfam": "ipv4", 00:13:11.749 "trsvcid": "$NVMF_PORT", 00:13:11.749 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:11.749 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:11.749 "hdgst": ${hdgst:-false}, 00:13:11.749 "ddgst": ${ddgst:-false} 00:13:11.749 }, 00:13:11.749 "method": "bdev_nvme_attach_controller" 00:13:11.749 } 00:13:11.749 EOF 00:13:11.749 )") 00:13:11.749 23:16:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:13:11.749 [2024-07-15 23:16:26.780864] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.749 [2024-07-15 23:16:26.780911] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.749 23:16:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:13:11.749 23:16:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:13:11.749 23:16:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:11.749 "params": { 00:13:11.749 "name": "Nvme1", 00:13:11.749 "trtype": "tcp", 00:13:11.749 "traddr": "10.0.0.2", 00:13:11.749 "adrfam": "ipv4", 00:13:11.749 "trsvcid": "4420", 00:13:11.749 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:11.749 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:11.749 "hdgst": false, 00:13:11.749 "ddgst": false 00:13:11.749 }, 00:13:11.749 "method": "bdev_nvme_attach_controller" 00:13:11.749 }' 00:13:11.749 [2024-07-15 23:16:26.788788] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.749 [2024-07-15 23:16:26.788814] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.749 [2024-07-15 23:16:26.796801] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.749 [2024-07-15 23:16:26.796824] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.749 [2024-07-15 23:16:26.804819] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.749 [2024-07-15 23:16:26.804841] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.749 [2024-07-15 23:16:26.812832] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.749 [2024-07-15 23:16:26.812853] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.749 [2024-07-15 23:16:26.818759] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:13:11.749 [2024-07-15 23:16:26.818824] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2315873 ] 00:13:11.749 [2024-07-15 23:16:26.820836] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.749 [2024-07-15 23:16:26.820858] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.749 [2024-07-15 23:16:26.828864] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.749 [2024-07-15 23:16:26.828886] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.749 [2024-07-15 23:16:26.836881] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.749 [2024-07-15 23:16:26.836903] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.749 [2024-07-15 23:16:26.844902] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.749 [2024-07-15 23:16:26.844923] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.749 EAL: No free 2048 kB hugepages reported on node 1 00:13:11.749 [2024-07-15 23:16:26.852927] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.749 [2024-07-15 23:16:26.852949] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.749 [2024-07-15 23:16:26.860949] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.749 [2024-07-15 23:16:26.860971] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.749 [2024-07-15 23:16:26.868969] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.749 [2024-07-15 23:16:26.869003] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.749 [2024-07-15 23:16:26.876989] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.749 [2024-07-15 23:16:26.877038] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.749 [2024-07-15 23:16:26.882077] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.749 [2024-07-15 23:16:26.885051] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.749 [2024-07-15 23:16:26.885077] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.749 [2024-07-15 23:16:26.893125] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.749 [2024-07-15 23:16:26.893171] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.749 [2024-07-15 23:16:26.901101] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.749 [2024-07-15 23:16:26.901131] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.749 [2024-07-15 23:16:26.909114] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.749 [2024-07-15 23:16:26.909141] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.749 [2024-07-15 23:16:26.917119] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.749 [2024-07-15 23:16:26.917146] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.749 [2024-07-15 23:16:26.925140] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.749 [2024-07-15 23:16:26.925166] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.749 [2024-07-15 23:16:26.933162] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.749 [2024-07-15 23:16:26.933188] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.749 [2024-07-15 23:16:26.941185] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.749 [2024-07-15 23:16:26.941212] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.749 [2024-07-15 23:16:26.949244] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.749 [2024-07-15 23:16:26.949280] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.749 [2024-07-15 23:16:26.957261] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.750 [2024-07-15 23:16:26.957299] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.750 [2024-07-15 23:16:26.965254] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.750 [2024-07-15 23:16:26.965280] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.750 [2024-07-15 23:16:26.973273] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.750 [2024-07-15 23:16:26.973300] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.750 [2024-07-15 23:16:26.981294] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:13:11.750 [2024-07-15 23:16:26.981321] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.750 [2024-07-15 23:16:26.989316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.750 [2024-07-15 23:16:26.989342] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.750 [2024-07-15 23:16:26.997342] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.750 [2024-07-15 23:16:26.997368] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.750 [2024-07-15 23:16:27.003152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.750 [2024-07-15 23:16:27.005362] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.750 [2024-07-15 23:16:27.005387] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.750 [2024-07-15 23:16:27.013383] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.750 [2024-07-15 23:16:27.013409] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.750 [2024-07-15 23:16:27.021437] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.750 [2024-07-15 23:16:27.021476] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.750 [2024-07-15 23:16:27.029464] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.750 [2024-07-15 23:16:27.029505] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.750 [2024-07-15 23:16:27.037489] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.750 [2024-07-15 23:16:27.037530] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.750 [2024-07-15 23:16:27.045517] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.750 [2024-07-15 23:16:27.045561] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.750 [2024-07-15 23:16:27.053532] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.750 [2024-07-15 23:16:27.053576] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.750 [2024-07-15 23:16:27.061559] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.750 [2024-07-15 23:16:27.061602] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.009 [2024-07-15 23:16:27.069584] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.009 [2024-07-15 23:16:27.069626] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.009 [2024-07-15 23:16:27.077564] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.009 [2024-07-15 23:16:27.077589] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.009 [2024-07-15 23:16:27.085621] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.009 [2024-07-15 23:16:27.085664] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.009 [2024-07-15 23:16:27.093643] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:13:12.009 [2024-07-15 23:16:27.093688] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.009 [2024-07-15 23:16:27.101647] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.009 [2024-07-15 23:16:27.101681] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.009 [2024-07-15 23:16:27.109650] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.009 [2024-07-15 23:16:27.109675] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.009 [2024-07-15 23:16:27.117671] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.009 [2024-07-15 23:16:27.117696] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.009 [2024-07-15 23:16:27.125711] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.009 [2024-07-15 23:16:27.125760] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.009 [2024-07-15 23:16:27.133723] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.009 [2024-07-15 23:16:27.133760] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.009 [2024-07-15 23:16:27.141750] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.009 [2024-07-15 23:16:27.141791] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.009 [2024-07-15 23:16:27.149774] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.009 [2024-07-15 23:16:27.149815] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.009 [2024-07-15 23:16:27.157807] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.009 [2024-07-15 23:16:27.157831] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.009 [2024-07-15 23:16:27.165829] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.009 [2024-07-15 23:16:27.165853] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.009 [2024-07-15 23:16:27.173839] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.009 [2024-07-15 23:16:27.173862] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.009 [2024-07-15 23:16:27.181869] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.009 [2024-07-15 23:16:27.181896] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.009 [2024-07-15 23:16:27.189890] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.009 [2024-07-15 23:16:27.189914] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.009 Running I/O for 5 seconds... 
00:13:12.009 [2024-07-15 23:16:27.197910] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.009 [2024-07-15 23:16:27.197932] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.009 [2024-07-15 23:16:27.213469] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.009 [2024-07-15 23:16:27.213501] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.009 [2024-07-15 23:16:27.224952] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.009 [2024-07-15 23:16:27.224979] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.009 [2024-07-15 23:16:27.237054] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.009 [2024-07-15 23:16:27.237085] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.009 [2024-07-15 23:16:27.249100] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.009 [2024-07-15 23:16:27.249131] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.009 [2024-07-15 23:16:27.261396] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.009 [2024-07-15 23:16:27.261428] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.009 [2024-07-15 23:16:27.273758] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.009 [2024-07-15 23:16:27.273805] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.009 [2024-07-15 23:16:27.286139] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.009 [2024-07-15 23:16:27.286171] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.009 [2024-07-15 23:16:27.297918] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.009 [2024-07-15 23:16:27.297945] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.009 [2024-07-15 23:16:27.309811] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.009 [2024-07-15 23:16:27.309838] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.009 [2024-07-15 23:16:27.321480] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.009 [2024-07-15 23:16:27.321523] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.268 [2024-07-15 23:16:27.333509] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.268 [2024-07-15 23:16:27.333540] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.268 [2024-07-15 23:16:27.345072] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.268 [2024-07-15 23:16:27.345103] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.268 [2024-07-15 23:16:27.356809] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.268 [2024-07-15 23:16:27.356837] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.268 [2024-07-15 23:16:27.368881] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.268 
[2024-07-15 23:16:27.368909] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.268 [2024-07-15 23:16:27.380784] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.268 [2024-07-15 23:16:27.380811] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.268 [2024-07-15 23:16:27.392271] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.268 [2024-07-15 23:16:27.392302] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.268 [2024-07-15 23:16:27.404732] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.268 [2024-07-15 23:16:27.404787] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.268 [2024-07-15 23:16:27.416090] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.268 [2024-07-15 23:16:27.416121] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.268 [2024-07-15 23:16:27.427946] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.268 [2024-07-15 23:16:27.427973] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.268 [2024-07-15 23:16:27.441093] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.268 [2024-07-15 23:16:27.441133] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.268 [2024-07-15 23:16:27.451894] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.268 [2024-07-15 23:16:27.451922] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.268 [2024-07-15 23:16:27.464053] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.268 [2024-07-15 23:16:27.464085] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.268 [2024-07-15 23:16:27.475915] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.268 [2024-07-15 23:16:27.475942] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.268 [2024-07-15 23:16:27.487514] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.268 [2024-07-15 23:16:27.487546] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.268 [2024-07-15 23:16:27.499107] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.268 [2024-07-15 23:16:27.499133] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.268 [2024-07-15 23:16:27.510078] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.268 [2024-07-15 23:16:27.510110] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.268 [2024-07-15 23:16:27.521964] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.268 [2024-07-15 23:16:27.522000] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.268 [2024-07-15 23:16:27.534510] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.268 [2024-07-15 23:16:27.534541] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.268 [2024-07-15 23:16:27.546312] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.268 [2024-07-15 23:16:27.546350] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.268 [2024-07-15 23:16:27.558086] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.268 [2024-07-15 23:16:27.558116] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.268 [2024-07-15 23:16:27.570169] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.268 [2024-07-15 23:16:27.570198] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.268 [2024-07-15 23:16:27.581079] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.268 [2024-07-15 23:16:27.581121] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.526 [2024-07-15 23:16:27.592169] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.526 [2024-07-15 23:16:27.592195] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.526 [2024-07-15 23:16:27.602910] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.526 [2024-07-15 23:16:27.602937] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.526 [2024-07-15 23:16:27.614031] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.526 [2024-07-15 23:16:27.614058] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.526 [2024-07-15 23:16:27.624623] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.526 [2024-07-15 23:16:27.624648] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.526 [2024-07-15 23:16:27.635531] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.526 [2024-07-15 23:16:27.635557] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.526 [2024-07-15 23:16:27.646091] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.526 [2024-07-15 23:16:27.646117] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.526 [2024-07-15 23:16:27.656812] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.526 [2024-07-15 23:16:27.656839] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.526 [2024-07-15 23:16:27.667707] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.526 [2024-07-15 23:16:27.667757] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.526 [2024-07-15 23:16:27.678368] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.526 [2024-07-15 23:16:27.678395] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.526 [2024-07-15 23:16:27.688769] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.526 [2024-07-15 23:16:27.688797] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.526 [2024-07-15 23:16:27.701422] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.526 [2024-07-15 23:16:27.701447] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.526 [2024-07-15 23:16:27.711292] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.526 [2024-07-15 23:16:27.711318] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.526 [2024-07-15 23:16:27.722575] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.526 [2024-07-15 23:16:27.722602] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.526 [2024-07-15 23:16:27.733092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.526 [2024-07-15 23:16:27.733118] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.526 [2024-07-15 23:16:27.743374] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.526 [2024-07-15 23:16:27.743401] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.526 [2024-07-15 23:16:27.754090] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.526 [2024-07-15 23:16:27.754116] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.527 [2024-07-15 23:16:27.764571] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.527 [2024-07-15 23:16:27.764597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.527 [2024-07-15 23:16:27.775246] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.527 [2024-07-15 23:16:27.775272] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.527 [2024-07-15 23:16:27.786904] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.527 [2024-07-15 23:16:27.786932] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.527 [2024-07-15 23:16:27.796132] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.527 [2024-07-15 23:16:27.796158] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.527 [2024-07-15 23:16:27.807656] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.527 [2024-07-15 23:16:27.807682] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.527 [2024-07-15 23:16:27.818431] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.527 [2024-07-15 23:16:27.818456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.527 [2024-07-15 23:16:27.829473] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.527 [2024-07-15 23:16:27.829499] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.527 [2024-07-15 23:16:27.840554] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.527 [2024-07-15 23:16:27.840580] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.785 [2024-07-15 23:16:27.851337] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.785 [2024-07-15 23:16:27.851363] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.785 [2024-07-15 23:16:27.862270] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.785 [2024-07-15 23:16:27.862295] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.785 [2024-07-15 23:16:27.872840] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.785 [2024-07-15 23:16:27.872868] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.785 [2024-07-15 23:16:27.884139] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.785 [2024-07-15 23:16:27.884181] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.785 [2024-07-15 23:16:27.895151] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.785 [2024-07-15 23:16:27.895177] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.785 [2024-07-15 23:16:27.905901] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.785 [2024-07-15 23:16:27.905930] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.785 [2024-07-15 23:16:27.916923] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.785 [2024-07-15 23:16:27.916951] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.785 [2024-07-15 23:16:27.927779] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.785 [2024-07-15 23:16:27.927821] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.785 [2024-07-15 23:16:27.938314] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.785 [2024-07-15 23:16:27.938340] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.785 [2024-07-15 23:16:27.948863] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.785 [2024-07-15 23:16:27.948892] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.785 [2024-07-15 23:16:27.959841] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.785 [2024-07-15 23:16:27.959870] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.785 [2024-07-15 23:16:27.970565] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.785 [2024-07-15 23:16:27.970591] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.785 [2024-07-15 23:16:27.980764] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.785 [2024-07-15 23:16:27.980792] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.785 [2024-07-15 23:16:27.991818] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.785 [2024-07-15 23:16:27.991845] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.785 [2024-07-15 23:16:28.003699] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.785 [2024-07-15 23:16:28.003730] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.785 [2024-07-15 23:16:28.015617] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.785 [2024-07-15 23:16:28.015648] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.785 [2024-07-15 23:16:28.027119] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.785 [2024-07-15 23:16:28.027152] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.785 [2024-07-15 23:16:28.038452] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.785 [2024-07-15 23:16:28.038484] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.785 [2024-07-15 23:16:28.049857] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.785 [2024-07-15 23:16:28.049884] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.785 [2024-07-15 23:16:28.061301] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.785 [2024-07-15 23:16:28.061336] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.785 [2024-07-15 23:16:28.072971] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.785 [2024-07-15 23:16:28.072999] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.785 [2024-07-15 23:16:28.084289] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.785 [2024-07-15 23:16:28.084320] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.785 [2024-07-15 23:16:28.098218] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.785 [2024-07-15 23:16:28.098273] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.043 [2024-07-15 23:16:28.109684] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.043 [2024-07-15 23:16:28.109716] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.043 [2024-07-15 23:16:28.121148] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.043 [2024-07-15 23:16:28.121180] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.043 [2024-07-15 23:16:28.133106] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.043 [2024-07-15 23:16:28.133138] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.043 [2024-07-15 23:16:28.144517] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.043 [2024-07-15 23:16:28.144549] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.043 [2024-07-15 23:16:28.156409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.043 [2024-07-15 23:16:28.156440] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.043 [2024-07-15 23:16:28.169085] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.043 [2024-07-15 23:16:28.169117] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.043 [2024-07-15 23:16:28.181092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.043 [2024-07-15 23:16:28.181123] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.043 [2024-07-15 23:16:28.192719] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.043 [2024-07-15 23:16:28.192763] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.043 [2024-07-15 23:16:28.204064] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.043 [2024-07-15 23:16:28.204095] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.043 [2024-07-15 23:16:28.215721] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.043 [2024-07-15 23:16:28.215760] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.043 [2024-07-15 23:16:28.227219] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.043 [2024-07-15 23:16:28.227250] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.043 [2024-07-15 23:16:28.238692] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.043 [2024-07-15 23:16:28.238724] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.043 [2024-07-15 23:16:28.250720] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.043 [2024-07-15 23:16:28.250759] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.043 [2024-07-15 23:16:28.261941] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.043 [2024-07-15 23:16:28.261968] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.043 [2024-07-15 23:16:28.274169] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.043 [2024-07-15 23:16:28.274200] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.043 [2024-07-15 23:16:28.285712] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.043 [2024-07-15 23:16:28.285752] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.043 [2024-07-15 23:16:28.297663] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.043 [2024-07-15 23:16:28.297694] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.043 [2024-07-15 23:16:28.309174] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.043 [2024-07-15 23:16:28.309205] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.043 [2024-07-15 23:16:28.320729] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.043 [2024-07-15 23:16:28.320785] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.043 [2024-07-15 23:16:28.332937] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.043 [2024-07-15 23:16:28.332964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.043 [2024-07-15 23:16:28.344907] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.043 [2024-07-15 23:16:28.344934] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.043 [2024-07-15 23:16:28.356669] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.044 [2024-07-15 23:16:28.356700] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.302 [2024-07-15 23:16:28.368674] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.302 [2024-07-15 23:16:28.368706] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.302 [2024-07-15 23:16:28.379939] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.302 [2024-07-15 23:16:28.379966] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.302 [2024-07-15 23:16:28.392179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.302 [2024-07-15 23:16:28.392218] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.302 [2024-07-15 23:16:28.403586] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.302 [2024-07-15 23:16:28.403617] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.302 [2024-07-15 23:16:28.415374] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.302 [2024-07-15 23:16:28.415405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.302 [2024-07-15 23:16:28.427066] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.302 [2024-07-15 23:16:28.427098] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.302 [2024-07-15 23:16:28.438655] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.302 [2024-07-15 23:16:28.438687] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.302 [2024-07-15 23:16:28.450044] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.302 [2024-07-15 23:16:28.450075] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.302 [2024-07-15 23:16:28.461446] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.302 [2024-07-15 23:16:28.461477] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.302 [2024-07-15 23:16:28.473072] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.302 [2024-07-15 23:16:28.473103] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.302 [2024-07-15 23:16:28.484855] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.302 [2024-07-15 23:16:28.484881] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.302 [2024-07-15 23:16:28.496372] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.302 [2024-07-15 23:16:28.496403] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.302 [2024-07-15 23:16:28.507969] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.302 [2024-07-15 23:16:28.507996] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.302 [2024-07-15 23:16:28.519983] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.302 [2024-07-15 23:16:28.520025] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.302 [2024-07-15 23:16:28.531344] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.302 [2024-07-15 23:16:28.531375] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.302 [2024-07-15 23:16:28.543150] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.302 [2024-07-15 23:16:28.543181] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.302 [2024-07-15 23:16:28.554979] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.302 [2024-07-15 23:16:28.555008] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.302 [2024-07-15 23:16:28.566908] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.302 [2024-07-15 23:16:28.566935] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.302 [2024-07-15 23:16:28.580609] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.302 [2024-07-15 23:16:28.580640] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.302 [2024-07-15 23:16:28.591529] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.302 [2024-07-15 23:16:28.591560] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.302 [2024-07-15 23:16:28.602820] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.302 [2024-07-15 23:16:28.602846] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.302 [2024-07-15 23:16:28.614438] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.302 [2024-07-15 23:16:28.614477] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.561 [2024-07-15 23:16:28.626333] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.561 [2024-07-15 23:16:28.626364] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.561 [2024-07-15 23:16:28.638111] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.561 [2024-07-15 23:16:28.638142] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.561 [2024-07-15 23:16:28.650294] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.561 [2024-07-15 23:16:28.650325] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.561 [2024-07-15 23:16:28.661946] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.561 [2024-07-15 23:16:28.661973] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.561 [2024-07-15 23:16:28.673471] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.561 [2024-07-15 23:16:28.673503] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.561 [2024-07-15 23:16:28.685602] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.561 [2024-07-15 23:16:28.685633] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.561 [2024-07-15 23:16:28.697044] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.561 [2024-07-15 23:16:28.697075] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.561 [2024-07-15 23:16:28.708873] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.561 [2024-07-15 23:16:28.708899] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.561 [2024-07-15 23:16:28.720349] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.561 [2024-07-15 23:16:28.720381] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.561 [2024-07-15 23:16:28.731811] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.561 [2024-07-15 23:16:28.731839] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.561 [2024-07-15 23:16:28.745633] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.561 [2024-07-15 23:16:28.745664] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.561 [2024-07-15 23:16:28.756201] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.561 [2024-07-15 23:16:28.756233] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.561 [2024-07-15 23:16:28.768582] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.561 [2024-07-15 23:16:28.768613] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.561 [2024-07-15 23:16:28.781137] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.561 [2024-07-15 23:16:28.781168] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.561 [2024-07-15 23:16:28.792675] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.561 [2024-07-15 23:16:28.792706] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.561 [2024-07-15 23:16:28.804492] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.561 [2024-07-15 23:16:28.804523] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.561 [2024-07-15 23:16:28.816304] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.561 [2024-07-15 23:16:28.816335] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.561 [2024-07-15 23:16:28.827939] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.561 [2024-07-15 23:16:28.827967] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.561 [2024-07-15 23:16:28.839312] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.561 [2024-07-15 23:16:28.839350] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.561 [2024-07-15 23:16:28.850936] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.561 [2024-07-15 23:16:28.850963] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.561 [2024-07-15 23:16:28.862501] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.561 [2024-07-15 23:16:28.862533] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.561 [2024-07-15 23:16:28.874030] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.561 [2024-07-15 23:16:28.874058] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.819 [2024-07-15 23:16:28.885583] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.819 [2024-07-15 23:16:28.885614] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.819 [2024-07-15 23:16:28.897523] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.819 [2024-07-15 23:16:28.897554] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.819 [2024-07-15 23:16:28.909256] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.819 [2024-07-15 23:16:28.909287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.819 [2024-07-15 23:16:28.922702] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.819 [2024-07-15 23:16:28.922733] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.819 [2024-07-15 23:16:28.933866] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.819 [2024-07-15 23:16:28.933894] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.819 [2024-07-15 23:16:28.945896] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.819 [2024-07-15 23:16:28.945923] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.819 [2024-07-15 23:16:28.957613] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.819 [2024-07-15 23:16:28.957644] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.819 [2024-07-15 23:16:28.969214] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.819 [2024-07-15 23:16:28.969246] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.819 [2024-07-15 23:16:28.980454] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.819 [2024-07-15 23:16:28.980485] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.819 [2024-07-15 23:16:28.992164] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.819 [2024-07-15 23:16:28.992196] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.819 [2024-07-15 23:16:29.003535] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.819 [2024-07-15 23:16:29.003566] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.819 [2024-07-15 23:16:29.015002] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.819 [2024-07-15 23:16:29.015044] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.819 [2024-07-15 23:16:29.025867] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.819 [2024-07-15 23:16:29.025896] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.819 [2024-07-15 23:16:29.036692] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.819 [2024-07-15 23:16:29.036718] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.819 [2024-07-15 23:16:29.049224] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.819 [2024-07-15 23:16:29.049250] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.819 [2024-07-15 23:16:29.060780] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.819 [2024-07-15 23:16:29.060814] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.819 [2024-07-15 23:16:29.070153] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.819 [2024-07-15 23:16:29.070179] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.819 [2024-07-15 23:16:29.081364] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.819 [2024-07-15 23:16:29.081390] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.819 [2024-07-15 23:16:29.091747] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.819 [2024-07-15 23:16:29.091775] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.819 [2024-07-15 23:16:29.102300] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.819 [2024-07-15 23:16:29.102326] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.819 [2024-07-15 23:16:29.113472] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.819 [2024-07-15 23:16:29.113498] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.819 [2024-07-15 23:16:29.124134] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.819 [2024-07-15 23:16:29.124161] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.078 [2024-07-15 23:16:29.134681] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.078 [2024-07-15 23:16:29.134709] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.078 [2024-07-15 23:16:29.145755] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.078 [2024-07-15 23:16:29.145782] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.078 [2024-07-15 23:16:29.156167] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.078 [2024-07-15 23:16:29.156193] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.078 [2024-07-15 23:16:29.166888] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.078 [2024-07-15 23:16:29.166917] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.078 [2024-07-15 23:16:29.177387] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.078 [2024-07-15 23:16:29.177414] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.078 [2024-07-15 23:16:29.188462] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.078 [2024-07-15 23:16:29.188488] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.078 [2024-07-15 23:16:29.198623] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.078 [2024-07-15 23:16:29.198649] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.078 [2024-07-15 23:16:29.209124] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.078 [2024-07-15 23:16:29.209151] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.078 [2024-07-15 23:16:29.219383] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.078 [2024-07-15 23:16:29.219421] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.078 [2024-07-15 23:16:29.229920] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.078 [2024-07-15 23:16:29.229947] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.078 [2024-07-15 23:16:29.240711] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.078 [2024-07-15 23:16:29.240764] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.078 [2024-07-15 23:16:29.253075] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.078 [2024-07-15 23:16:29.253116] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.078 [2024-07-15 23:16:29.263352] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.078 [2024-07-15 23:16:29.263382] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.078 [2024-07-15 23:16:29.273845] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.078 [2024-07-15 23:16:29.273874] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.078 [2024-07-15 23:16:29.284422] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.078 [2024-07-15 23:16:29.284448] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.078 [2024-07-15 23:16:29.294772] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.078 [2024-07-15 23:16:29.294800] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.078 [2024-07-15 23:16:29.305067] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.078 [2024-07-15 23:16:29.305108] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.078 [2024-07-15 23:16:29.315547] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.078 [2024-07-15 23:16:29.315573] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.078 [2024-07-15 23:16:29.326486] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.078 [2024-07-15 23:16:29.326511] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.078 [2024-07-15 23:16:29.337685] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.078 [2024-07-15 23:16:29.337710] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.078 [2024-07-15 23:16:29.348225] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.078 [2024-07-15 23:16:29.348251] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.078 [2024-07-15 23:16:29.359220] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.078 [2024-07-15 23:16:29.359246] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.078 [2024-07-15 23:16:29.370044] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.078 [2024-07-15 23:16:29.370071] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.078 [2024-07-15 23:16:29.380316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.078 [2024-07-15 23:16:29.380341] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.078 [2024-07-15 23:16:29.390712] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.078 [2024-07-15 23:16:29.390765] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.336 [2024-07-15 23:16:29.401454] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.336 [2024-07-15 23:16:29.401479] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.336 [2024-07-15 23:16:29.412200] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.336 [2024-07-15 23:16:29.412226] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.336 [2024-07-15 23:16:29.422829] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.336 [2024-07-15 23:16:29.422856] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.336 [2024-07-15 23:16:29.433393] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.336 [2024-07-15 23:16:29.433420] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.336 [2024-07-15 23:16:29.444097] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.336 [2024-07-15 23:16:29.444123] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.336 [2024-07-15 23:16:29.454544] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.336 [2024-07-15 23:16:29.454570] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.336 [2024-07-15 23:16:29.465471] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.336 [2024-07-15 23:16:29.465497] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.336 [2024-07-15 23:16:29.476618] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.336 [2024-07-15 23:16:29.476644] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.336 [2024-07-15 23:16:29.488358] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.336 [2024-07-15 23:16:29.488390] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.336 [2024-07-15 23:16:29.500167] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.336 [2024-07-15 23:16:29.500199] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.336 [2024-07-15 23:16:29.511697] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.336 [2024-07-15 23:16:29.511729] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.336 [2024-07-15 23:16:29.523588] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.336 [2024-07-15 23:16:29.523619] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.336 [2024-07-15 23:16:29.535622] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.336 [2024-07-15 23:16:29.535653] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.336 [2024-07-15 23:16:29.547203] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.336 [2024-07-15 23:16:29.547234] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.336 [2024-07-15 23:16:29.559151] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.336 [2024-07-15 23:16:29.559183] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.336 [2024-07-15 23:16:29.570827] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.336 [2024-07-15 23:16:29.570853] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.336 [2024-07-15 23:16:29.582642] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.336 [2024-07-15 23:16:29.582673] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.336 [2024-07-15 23:16:29.594173] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.336 [2024-07-15 23:16:29.594204] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.336 [2024-07-15 23:16:29.605520] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.336 [2024-07-15 23:16:29.605551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.336 [2024-07-15 23:16:29.617351] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.336 [2024-07-15 23:16:29.617382] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.336 [2024-07-15 23:16:29.629189] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.336 [2024-07-15 23:16:29.629221] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.336 [2024-07-15 23:16:29.641120] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.336 [2024-07-15 23:16:29.641151] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.594 [2024-07-15 23:16:29.652873] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.594 [2024-07-15 23:16:29.652901] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.594 [2024-07-15 23:16:29.664696] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.594 [2024-07-15 23:16:29.664727] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.594 [2024-07-15 23:16:29.676858] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.594 [2024-07-15 23:16:29.676885] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.594 [2024-07-15 23:16:29.688608] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.594 [2024-07-15 23:16:29.688639] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.594 [2024-07-15 23:16:29.700029] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.594 [2024-07-15 23:16:29.700060] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.594 [2024-07-15 23:16:29.711779] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.594 [2024-07-15 23:16:29.711823] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.594 [2024-07-15 23:16:29.723521] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.594 [2024-07-15 23:16:29.723552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.594 [2024-07-15 23:16:29.735371] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.594 [2024-07-15 23:16:29.735402] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.594 [2024-07-15 23:16:29.746844] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.594 [2024-07-15 23:16:29.746870] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.594 [2024-07-15 23:16:29.758481] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.594 [2024-07-15 23:16:29.758512] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.594 [2024-07-15 23:16:29.769861] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.594 [2024-07-15 23:16:29.769887] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.594 [2024-07-15 23:16:29.781055] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.594 [2024-07-15 23:16:29.781087] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.594 [2024-07-15 23:16:29.792463] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.594 [2024-07-15 23:16:29.792494] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.594 [2024-07-15 23:16:29.806633] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.594 [2024-07-15 23:16:29.806664] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.594 [2024-07-15 23:16:29.818254] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.594 [2024-07-15 23:16:29.818285] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.594 [2024-07-15 23:16:29.829673] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.594 [2024-07-15 23:16:29.829704] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.594 [2024-07-15 23:16:29.840871] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.594 [2024-07-15 23:16:29.840898] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.594 [2024-07-15 23:16:29.852558] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.594 [2024-07-15 23:16:29.852589] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.595 [2024-07-15 23:16:29.864251] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.595 [2024-07-15 23:16:29.864282] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.595 [2024-07-15 23:16:29.875969] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.595 [2024-07-15 23:16:29.875996] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.595 [2024-07-15 23:16:29.887445] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.595 [2024-07-15 23:16:29.887476] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.595 [2024-07-15 23:16:29.899221] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.595 [2024-07-15 23:16:29.899252] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.853 [2024-07-15 23:16:29.911039] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.853 [2024-07-15 23:16:29.911082] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.853 [2024-07-15 23:16:29.922996] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.853 [2024-07-15 23:16:29.923041] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.853 [2024-07-15 23:16:29.934290] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.853 [2024-07-15 23:16:29.934322] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.853 [2024-07-15 23:16:29.945590] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.853 [2024-07-15 23:16:29.945621] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.853 [2024-07-15 23:16:29.957139] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.853 [2024-07-15 23:16:29.957181] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.853 [2024-07-15 23:16:29.969004] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.853 [2024-07-15 23:16:29.969049] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.853 [2024-07-15 23:16:29.980604] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.853 [2024-07-15 23:16:29.980646] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.853 [2024-07-15 23:16:29.992248] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.853 [2024-07-15 23:16:29.992278] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.853 [2024-07-15 23:16:30.004070] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.853 [2024-07-15 23:16:30.004114] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.853 [2024-07-15 23:16:30.015900] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.853 [2024-07-15 23:16:30.015950] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.853 [2024-07-15 23:16:30.028491] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.853 [2024-07-15 23:16:30.028527] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.853 [2024-07-15 23:16:30.040816] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.853 [2024-07-15 23:16:30.040845] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.853 [2024-07-15 23:16:30.053115] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.853 [2024-07-15 23:16:30.053147] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.853 [2024-07-15 23:16:30.064815] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.853 [2024-07-15 23:16:30.064843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.853 [2024-07-15 23:16:30.076595] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.853 [2024-07-15 23:16:30.076625] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.853 [2024-07-15 23:16:30.088170] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.853 [2024-07-15 23:16:30.088202] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.853 [2024-07-15 23:16:30.099414] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.853 [2024-07-15 23:16:30.099445] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.853 [2024-07-15 23:16:30.111173] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.853 [2024-07-15 23:16:30.111198] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.853 [2024-07-15 23:16:30.123309] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.853 [2024-07-15 23:16:30.123349] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.853 [2024-07-15 23:16:30.135317] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.853 [2024-07-15 23:16:30.135348] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.853 [2024-07-15 23:16:30.147353] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.853 [2024-07-15 23:16:30.147385] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.853 [2024-07-15 23:16:30.159428] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.853 [2024-07-15 23:16:30.159459] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.111 [2024-07-15 23:16:30.171567] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.111 [2024-07-15 23:16:30.171597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.111 [2024-07-15 23:16:30.183316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.111 [2024-07-15 23:16:30.183346] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.111 [2024-07-15 23:16:30.195188] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.111 [2024-07-15 23:16:30.195219] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.111 [2024-07-15 23:16:30.206831] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.111 [2024-07-15 23:16:30.206858] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.111 [2024-07-15 23:16:30.218413] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.111 [2024-07-15 23:16:30.218445] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.111 [2024-07-15 23:16:30.230237] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.111 [2024-07-15 23:16:30.230268] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.111 [2024-07-15 23:16:30.242191] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.112 [2024-07-15 23:16:30.242222] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.112 [2024-07-15 23:16:30.254326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.112 [2024-07-15 23:16:30.254358] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.112 [2024-07-15 23:16:30.266289] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.112 [2024-07-15 23:16:30.266320] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.112 [2024-07-15 23:16:30.278315] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.112 [2024-07-15 23:16:30.278346] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.112 [2024-07-15 23:16:30.290183] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.112 [2024-07-15 23:16:30.290215] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.112 [2024-07-15 23:16:30.302001] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.112 [2024-07-15 23:16:30.302059] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.112 [2024-07-15 23:16:30.313481] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.112 [2024-07-15 23:16:30.313513] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.112 [2024-07-15 23:16:30.324702] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.112 [2024-07-15 23:16:30.324734] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.112 [2024-07-15 23:16:30.336046] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.112 [2024-07-15 23:16:30.336073] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.112 [2024-07-15 23:16:30.349003] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.112 [2024-07-15 23:16:30.349049] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.112 [2024-07-15 23:16:30.359444] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.112 [2024-07-15 23:16:30.359475] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.112 [2024-07-15 23:16:30.371311] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.112 [2024-07-15 23:16:30.371345] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.112 [2024-07-15 23:16:30.382733] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.112 [2024-07-15 23:16:30.382794] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.112 [2024-07-15 23:16:30.394413] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.112 [2024-07-15 23:16:30.394445] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.112 [2024-07-15 23:16:30.406356] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.112 [2024-07-15 23:16:30.406387] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.112 [2024-07-15 23:16:30.418055] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.112 [2024-07-15 23:16:30.418081] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.370 [2024-07-15 23:16:30.431724] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.370 [2024-07-15 23:16:30.431765] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.370 [2024-07-15 23:16:30.441814] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.370 [2024-07-15 23:16:30.441840] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.370 [2024-07-15 23:16:30.453383] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.370 [2024-07-15 23:16:30.453413] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.370 [2024-07-15 23:16:30.465453] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.370 [2024-07-15 23:16:30.465484] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.370 [2024-07-15 23:16:30.477537] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.370 [2024-07-15 23:16:30.477568] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.370 [2024-07-15 23:16:30.489143] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.370 [2024-07-15 23:16:30.489176] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.370 [2024-07-15 23:16:30.500963] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.370 [2024-07-15 23:16:30.501001] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.370 [2024-07-15 23:16:30.511702] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.370 [2024-07-15 23:16:30.511751] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.370 [2024-07-15 23:16:30.522599] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.370 [2024-07-15 23:16:30.522624] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.370 [2024-07-15 23:16:30.533571] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.370 [2024-07-15 23:16:30.533597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.370 [2024-07-15 23:16:30.544393] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.370 [2024-07-15 23:16:30.544420] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.370 [2024-07-15 23:16:30.555201] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.370 [2024-07-15 23:16:30.555227] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.370 [2024-07-15 23:16:30.565776] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.370 [2024-07-15 23:16:30.565814] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.370 [2024-07-15 23:16:30.577985] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.370 [2024-07-15 23:16:30.578027] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.370 [2024-07-15 23:16:30.587566] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.371 [2024-07-15 23:16:30.587591] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.371 [2024-07-15 23:16:30.598626] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.371 [2024-07-15 23:16:30.598652] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.371 [2024-07-15 23:16:30.609416] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.371 [2024-07-15 23:16:30.609442] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.371 [2024-07-15 23:16:30.620185] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.371 [2024-07-15 23:16:30.620211] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.371 [2024-07-15 23:16:30.632493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.371 [2024-07-15 23:16:30.632519] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.371 [2024-07-15 23:16:30.642868] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.371 [2024-07-15 23:16:30.642895] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.371 [2024-07-15 23:16:30.653868] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.371 [2024-07-15 23:16:30.653896] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.371 [2024-07-15 23:16:30.664500] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.371 [2024-07-15 23:16:30.664532] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.371 [2024-07-15 23:16:30.675194] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.371 [2024-07-15 23:16:30.675220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.629 [2024-07-15 23:16:30.686068] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.629 [2024-07-15 23:16:30.686110] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.629 [2024-07-15 23:16:30.697148] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.629 [2024-07-15 23:16:30.697174] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.629 [2024-07-15 23:16:30.708227] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.629 [2024-07-15 23:16:30.708253] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.629 [2024-07-15 23:16:30.718712] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.629 [2024-07-15 23:16:30.718763] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.629 [2024-07-15 23:16:30.729129] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.629 [2024-07-15 23:16:30.729155] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.629 [2024-07-15 23:16:30.739884] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.629 [2024-07-15 23:16:30.739911] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.629 [2024-07-15 23:16:30.750237] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.629 [2024-07-15 23:16:30.750263] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.629 [2024-07-15 23:16:30.760838] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.629 [2024-07-15 23:16:30.760865] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.629 [2024-07-15 23:16:30.771671] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.629 [2024-07-15 23:16:30.771705] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.629 [2024-07-15 23:16:30.782273] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.629 [2024-07-15 23:16:30.782299] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.629 [2024-07-15 23:16:30.792846] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.629 [2024-07-15 23:16:30.792874] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.629 [2024-07-15 23:16:30.803624] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.629 [2024-07-15 23:16:30.803650] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.629 [2024-07-15 23:16:30.814394] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.629 [2024-07-15 23:16:30.814420] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.629 [2024-07-15 23:16:30.825049] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.629 [2024-07-15 23:16:30.825075] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.629 [2024-07-15 23:16:30.835426] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.629 [2024-07-15 23:16:30.835453] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.629 [2024-07-15 23:16:30.845809] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.629 [2024-07-15 23:16:30.845836] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.629 [2024-07-15 23:16:30.859534] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.629 [2024-07-15 23:16:30.859559] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.629 [2024-07-15 23:16:30.869451] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.629 [2024-07-15 23:16:30.869477] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.629 [2024-07-15 23:16:30.879452] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.629 [2024-07-15 23:16:30.879477] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.629 [2024-07-15 23:16:30.890822] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.629 [2024-07-15 23:16:30.890859] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.629 [2024-07-15 23:16:30.901517] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.629 [2024-07-15 23:16:30.901543] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.630 [2024-07-15 23:16:30.911774] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.630 [2024-07-15 23:16:30.911816] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.630 [2024-07-15 23:16:30.922131] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.630 [2024-07-15 23:16:30.922157] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.630 [2024-07-15 23:16:30.932060] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.630 [2024-07-15 23:16:30.932089] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.630 [2024-07-15 23:16:30.942080] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.630 [2024-07-15 23:16:30.942123] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.887 [2024-07-15 23:16:30.952662] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.887 [2024-07-15 23:16:30.952688] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.887 [2024-07-15 23:16:30.965845] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.887 [2024-07-15 23:16:30.965882] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.888 [2024-07-15 23:16:30.976555] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.888 [2024-07-15 23:16:30.976592] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.888 [2024-07-15 23:16:30.988306] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.888 [2024-07-15 23:16:30.988345] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.888 [2024-07-15 23:16:30.999996] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.888 [2024-07-15 23:16:31.000041] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.888 [2024-07-15 23:16:31.011766] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.888 [2024-07-15 23:16:31.011809] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.888 [2024-07-15 23:16:31.023345] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.888 [2024-07-15 23:16:31.023377] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.888 [2024-07-15 23:16:31.034654] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.888 [2024-07-15 23:16:31.034685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.888 [2024-07-15 23:16:31.048100] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.888 [2024-07-15 23:16:31.048131] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.888 [2024-07-15 23:16:31.058565] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.888 [2024-07-15 23:16:31.058596] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.888 [2024-07-15 23:16:31.070455] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.888 [2024-07-15 23:16:31.070487] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.888 [2024-07-15 23:16:31.081855] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.888 [2024-07-15 23:16:31.081882] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.888 [2024-07-15 23:16:31.095663] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.888 [2024-07-15 23:16:31.095694] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.888 [2024-07-15 23:16:31.106600] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.888 [2024-07-15 23:16:31.106631] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.888 [2024-07-15 23:16:31.118830] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.888 [2024-07-15 23:16:31.118857] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.888 [2024-07-15 23:16:31.131245] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.888 [2024-07-15 23:16:31.131277] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.888 [2024-07-15 23:16:31.142947] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.888 [2024-07-15 23:16:31.142973] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.888 [2024-07-15 23:16:31.154539] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.888 [2024-07-15 23:16:31.154571] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.888 [2024-07-15 23:16:31.166285] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.888 [2024-07-15 23:16:31.166316] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.888 [2024-07-15 23:16:31.177809] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.888 [2024-07-15 23:16:31.177835] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.888 [2024-07-15 23:16:31.189869] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.888 [2024-07-15 23:16:31.189896] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.888 [2024-07-15 23:16:31.201865] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.888 [2024-07-15 23:16:31.201907] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.146 [2024-07-15 23:16:31.213640] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.146 [2024-07-15 23:16:31.213672] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.146 [2024-07-15 23:16:31.225321] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.146 [2024-07-15 23:16:31.225352] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.146 [2024-07-15 23:16:31.237154] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.146 [2024-07-15 23:16:31.237185] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.146 [2024-07-15 23:16:31.248906] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.146 [2024-07-15 23:16:31.248933] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.146 [2024-07-15 23:16:31.260400] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.146 [2024-07-15 23:16:31.260432] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.146 [2024-07-15 23:16:31.272157] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.146 [2024-07-15 23:16:31.272188] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.146 [2024-07-15 23:16:31.283809] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.146 [2024-07-15 23:16:31.283836] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.146 [2024-07-15 23:16:31.296054] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.146 [2024-07-15 23:16:31.296085] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.146 [2024-07-15 23:16:31.308270] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.146 [2024-07-15 23:16:31.308302] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.146 [2024-07-15 23:16:31.320366] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.146 [2024-07-15 23:16:31.320397] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.146 [2024-07-15 23:16:31.332066] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.146 [2024-07-15 23:16:31.332098] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.146 [2024-07-15 23:16:31.343672] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.146 [2024-07-15 23:16:31.343703] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.146 [2024-07-15 23:16:31.355825] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.146 [2024-07-15 23:16:31.355852] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.146 [2024-07-15 23:16:31.367345] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.146 [2024-07-15 23:16:31.367377] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.146 [2024-07-15 23:16:31.378599] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.146 [2024-07-15 23:16:31.378631] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.146 [2024-07-15 23:16:31.390183] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.146 [2024-07-15 23:16:31.390215] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.146 [2024-07-15 23:16:31.401610] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.146 [2024-07-15 23:16:31.401641] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.146 [2024-07-15 23:16:31.413577] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.146 [2024-07-15 23:16:31.413608] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.146 [2024-07-15 23:16:31.425796] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.146 [2024-07-15 23:16:31.425821] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.146 [2024-07-15 23:16:31.439247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.146 [2024-07-15 23:16:31.439279] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.146 [2024-07-15 23:16:31.449841] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.146 [2024-07-15 23:16:31.449868] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.404 [2024-07-15 23:16:31.461943] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.404 [2024-07-15 23:16:31.461972] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.404 [2024-07-15 23:16:31.473837] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.404 [2024-07-15 23:16:31.473864] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.404 [2024-07-15 23:16:31.485846] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.404 [2024-07-15 23:16:31.485873] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.404 [2024-07-15 23:16:31.497799] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.404 [2024-07-15 23:16:31.497825] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.404 [2024-07-15 23:16:31.509605] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.404 [2024-07-15 23:16:31.509636] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.404 [2024-07-15 23:16:31.521305] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.404 [2024-07-15 23:16:31.521336] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.404 [2024-07-15 23:16:31.533131] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.404 [2024-07-15 23:16:31.533162] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.404 [2024-07-15 23:16:31.544709] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.404 [2024-07-15 23:16:31.544750] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.404 [2024-07-15 23:16:31.556201] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.404 [2024-07-15 23:16:31.556232] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.404 [2024-07-15 23:16:31.567619] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.404 [2024-07-15 23:16:31.567650] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.404 [2024-07-15 23:16:31.579057] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.404 [2024-07-15 23:16:31.579097] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.404 [2024-07-15 23:16:31.590676] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.404 [2024-07-15 23:16:31.590706] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.404 [2024-07-15 23:16:31.602129] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.404 [2024-07-15 23:16:31.602160] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.404 [2024-07-15 23:16:31.613493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.404 [2024-07-15 23:16:31.613524] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.404 [2024-07-15 23:16:31.624937] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.404 [2024-07-15 23:16:31.624965] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.404 [2024-07-15 23:16:31.636524] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.404 [2024-07-15 23:16:31.636555] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.404 [2024-07-15 23:16:31.648104] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.404 [2024-07-15 23:16:31.648135] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.404 [2024-07-15 23:16:31.659584] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.404 [2024-07-15 23:16:31.659615] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.404 [2024-07-15 23:16:31.670881] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.404 [2024-07-15 23:16:31.670908] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.404 [2024-07-15 23:16:31.682375] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.404 [2024-07-15 23:16:31.682407] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.404 [2024-07-15 23:16:31.694113] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.404 [2024-07-15 23:16:31.694145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.404 [2024-07-15 23:16:31.706459] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.404 [2024-07-15 23:16:31.706492] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.404 [2024-07-15 23:16:31.718416] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.404 [2024-07-15 23:16:31.718447] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.662 [2024-07-15 23:16:31.729981] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.662 [2024-07-15 23:16:31.730007] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.662 [2024-07-15 23:16:31.743295] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.662 [2024-07-15 23:16:31.743326] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.662 [2024-07-15 23:16:31.754162] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.662 [2024-07-15 23:16:31.754194] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.662 [2024-07-15 23:16:31.766992] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.662 [2024-07-15 23:16:31.767035] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.662 [2024-07-15 23:16:31.782167] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.662 [2024-07-15 23:16:31.782199] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.662 [2024-07-15 23:16:31.792850] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.662 [2024-07-15 23:16:31.792876] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.662 [2024-07-15 23:16:31.805520] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.662 [2024-07-15 23:16:31.805552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.662 [2024-07-15 23:16:31.816844] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.662 [2024-07-15 23:16:31.816871] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.662 [2024-07-15 23:16:31.828664] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.662 [2024-07-15 23:16:31.828696] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.662 [2024-07-15 23:16:31.840454] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.662 [2024-07-15 23:16:31.840485] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.662 [2024-07-15 23:16:31.852427] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.662 [2024-07-15 23:16:31.852459] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.662 [2024-07-15 23:16:31.864686] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.662 [2024-07-15 23:16:31.864726] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.662 [2024-07-15 23:16:31.876616] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.662 [2024-07-15 23:16:31.876648] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.663 [2024-07-15 23:16:31.887986] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.663 [2024-07-15 23:16:31.888014] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.663 [2024-07-15 23:16:31.899862] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.663 [2024-07-15 23:16:31.899889] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.663 [2024-07-15 23:16:31.911794] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.663 [2024-07-15 23:16:31.911820] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.663 [2024-07-15 23:16:31.923707] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.663 [2024-07-15 23:16:31.923749] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.663 [2024-07-15 23:16:31.935598] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.663 [2024-07-15 23:16:31.935629] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.663 [2024-07-15 23:16:31.947086] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.663 [2024-07-15 23:16:31.947119] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.663 [2024-07-15 23:16:31.958712] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.663 [2024-07-15 23:16:31.958753] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.663 [2024-07-15 23:16:31.970533] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.663 [2024-07-15 23:16:31.970564] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.921 [2024-07-15 23:16:31.982442] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.921 [2024-07-15 23:16:31.982474] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.921 [2024-07-15 23:16:31.994345] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.921 [2024-07-15 23:16:31.994376] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.921 [2024-07-15 23:16:32.006662] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.921 [2024-07-15 23:16:32.006693] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.921 [2024-07-15 23:16:32.018523] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.921 [2024-07-15 23:16:32.018566] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.921 [2024-07-15 23:16:32.030107] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.921 [2024-07-15 23:16:32.030138] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.921 [2024-07-15 23:16:32.041645] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.921 [2024-07-15 23:16:32.041676] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.921 [2024-07-15 23:16:32.053311] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.921 [2024-07-15 23:16:32.053341] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.921 [2024-07-15 23:16:32.065113] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.921 [2024-07-15 23:16:32.065144] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.921 [2024-07-15 23:16:32.077154] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.921 [2024-07-15 23:16:32.077185] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.921 [2024-07-15 23:16:32.088864] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.921 [2024-07-15 23:16:32.088899] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.921 [2024-07-15 23:16:32.100621] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.921 [2024-07-15 23:16:32.100652] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.921 [2024-07-15 23:16:32.112618] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.921 [2024-07-15 23:16:32.112649] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.921 [2024-07-15 23:16:32.124162] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.921 [2024-07-15 23:16:32.124194] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.921 [2024-07-15 23:16:32.137799] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.921 [2024-07-15 23:16:32.137826] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.921 [2024-07-15 23:16:32.148945] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.921 [2024-07-15 23:16:32.148973] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.921 [2024-07-15 23:16:32.160161] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.921 [2024-07-15 23:16:32.160192] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.921 [2024-07-15 23:16:32.173715] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.921 [2024-07-15 23:16:32.173761] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.921 [2024-07-15 23:16:32.184850] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.921 [2024-07-15 23:16:32.184877] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.921 [2024-07-15 23:16:32.196509] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.921 [2024-07-15 23:16:32.196540] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.921 [2024-07-15 23:16:32.208677] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.921 [2024-07-15 23:16:32.208708] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.921 [2024-07-15 23:16:32.218340] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.921 [2024-07-15 23:16:32.218370] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.921 00:13:16.921 Latency(us) 00:13:16.921 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:16.921 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:13:16.921 Nvme1n1 : 5.01 11128.32 86.94 0.00 0.00 11486.56 4805.97 21651.15 00:13:16.921 =================================================================================================================== 00:13:16.921 Total : 11128.32 86.94 0.00 0.00 11486.56 4805.97 21651.15 00:13:16.921 [2024-07-15 23:16:32.225497] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.921 [2024-07-15 23:16:32.225525] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.921 [2024-07-15 23:16:32.233511] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.921 [2024-07-15 23:16:32.233540] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.180 [2024-07-15 23:16:32.241518] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.180 [2024-07-15 23:16:32.241540] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.180 [2024-07-15 23:16:32.249630] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.180 [2024-07-15 23:16:32.249685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.180 [2024-07-15 23:16:32.257633] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.180 [2024-07-15 23:16:32.257695] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.180 [2024-07-15 23:16:32.265658] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.180 [2024-07-15 23:16:32.265707] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.180 [2024-07-15 23:16:32.273670] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.180 [2024-07-15 23:16:32.273721] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.180 [2024-07-15 23:16:32.281711] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.180 [2024-07-15 23:16:32.281789] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.180 [2024-07-15 23:16:32.289734] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.180 [2024-07-15 23:16:32.289794] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.180 [2024-07-15 23:16:32.297778] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.180 [2024-07-15 23:16:32.297832] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.180 [2024-07-15 23:16:32.305802] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.180 [2024-07-15 23:16:32.305856] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.180 [2024-07-15 23:16:32.313812] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.180 [2024-07-15 23:16:32.313866] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.180 [2024-07-15 23:16:32.321840] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.180 [2024-07-15 23:16:32.321895] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.180 [2024-07-15 23:16:32.329866] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.180 [2024-07-15 23:16:32.329918] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.180 [2024-07-15 23:16:32.337893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.180 [2024-07-15 23:16:32.337948] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.180 [2024-07-15 23:16:32.345896] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.180 [2024-07-15 23:16:32.345951] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.180 [2024-07-15 23:16:32.353859] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.180 [2024-07-15 23:16:32.353882] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.180 [2024-07-15 23:16:32.361868] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.180 [2024-07-15 23:16:32.361890] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.180 [2024-07-15 23:16:32.369891] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.180 [2024-07-15 23:16:32.369913] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.180 [2024-07-15 23:16:32.377912] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.180 [2024-07-15 23:16:32.377934] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.180 [2024-07-15 23:16:32.385958] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.180 [2024-07-15 23:16:32.385988] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.180 [2024-07-15 23:16:32.394020] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.180 [2024-07-15 23:16:32.394071] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.180 [2024-07-15 23:16:32.402070] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.180 [2024-07-15 23:16:32.402122] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.180 [2024-07-15 23:16:32.410001] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.180 [2024-07-15 23:16:32.410051] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.180 [2024-07-15 23:16:32.418034] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.180 [2024-07-15 23:16:32.418056] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.180 [2024-07-15 23:16:32.426057] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.180 [2024-07-15 23:16:32.426078] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.180 [2024-07-15 23:16:32.434079] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.180 [2024-07-15 23:16:32.434121] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.180 [2024-07-15 23:16:32.442145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.180 [2024-07-15 23:16:32.442188] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.180 [2024-07-15 23:16:32.450178] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.180 [2024-07-15 23:16:32.450231] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.180 [2024-07-15 23:16:32.458200] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.180 [2024-07-15 23:16:32.458255] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.180 [2024-07-15 23:16:32.466172] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.180 [2024-07-15 23:16:32.466197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.180 [2024-07-15 23:16:32.474193] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.180 [2024-07-15 23:16:32.474219] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.180 [2024-07-15 23:16:32.482216] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.180 [2024-07-15 23:16:32.482243] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.180 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2315873) - No such process 00:13:17.180 23:16:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2315873 00:13:17.180 23:16:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:17.180 23:16:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.180 23:16:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:17.180 23:16:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.438 23:16:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:17.438 23:16:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.438 23:16:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:17.438 delay0 00:13:17.438 23:16:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.438 23:16:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:13:17.438 23:16:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.438 23:16:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:17.438 23:16:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.438 23:16:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w 
randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:13:17.438 EAL: No free 2048 kB hugepages reported on node 1 00:13:17.438 [2024-07-15 23:16:32.647872] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:23.989 Initializing NVMe Controllers 00:13:23.989 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:23.989 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:23.989 Initialization complete. Launching workers. 00:13:23.989 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 351 00:13:23.989 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 638, failed to submit 33 00:13:23.989 success 452, unsuccess 186, failed 0 00:13:23.989 23:16:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:13:23.989 23:16:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:13:23.989 23:16:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:23.989 23:16:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:13:23.989 23:16:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:23.989 23:16:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:13:23.989 23:16:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:23.989 23:16:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:23.989 rmmod nvme_tcp 00:13:23.989 rmmod nvme_fabrics 00:13:23.989 rmmod nvme_keyring 00:13:23.989 23:16:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:23.989 23:16:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:13:23.989 23:16:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:13:23.989 23:16:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2314529 ']' 00:13:23.989 23:16:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 2314529 00:13:23.989 23:16:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 2314529 ']' 00:13:23.989 23:16:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 2314529 00:13:23.989 23:16:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:13:23.989 23:16:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:23.989 23:16:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2314529 00:13:23.989 23:16:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:23.989 23:16:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:23.989 23:16:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2314529' 00:13:23.989 killing process with pid 2314529 00:13:23.989 23:16:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 2314529 00:13:23.989 23:16:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 2314529 00:13:23.989 23:16:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:23.989 23:16:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:23.989 23:16:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:23.989 23:16:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:23.990 23:16:39 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:23.990 23:16:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.990 23:16:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:23.990 23:16:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.518 23:16:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:26.518 00:13:26.518 real 0m28.427s 00:13:26.518 user 0m40.859s 00:13:26.518 sys 0m9.314s 00:13:26.518 23:16:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:26.518 23:16:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:26.518 ************************************ 00:13:26.518 END TEST nvmf_zcopy 00:13:26.518 ************************************ 00:13:26.518 23:16:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:26.518 23:16:41 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:26.518 23:16:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:26.518 23:16:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:26.519 23:16:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:26.519 ************************************ 00:13:26.519 START TEST nvmf_nmic 00:13:26.519 ************************************ 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:26.519 * Looking for test storage... 00:13:26.519 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic 
-- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:26.519 23:16:41 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:13:26.519 23:16:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:28.410 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:28.411 23:16:43 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:28.411 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:28.411 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 
00:13:28.411 Found net devices under 0000:84:00.0: cvl_0_0 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:28.411 Found net devices under 0000:84:00.1: cvl_0_1 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:28.411 23:16:43 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:28.411 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:28.411 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:13:28.411 00:13:28.411 --- 10.0.0.2 ping statistics --- 00:13:28.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.411 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:28.411 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:28.411 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:13:28.411 00:13:28.411 --- 10.0.0.1 ping statistics --- 00:13:28.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.411 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2319264 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2319264 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 2319264 ']' 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:28.411 23:16:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:28.411 [2024-07-15 23:16:43.461758] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
00:13:28.411 [2024-07-15 23:16:43.461860] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:28.411 EAL: No free 2048 kB hugepages reported on node 1 00:13:28.411 [2024-07-15 23:16:43.526836] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:28.411 [2024-07-15 23:16:43.637891] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:28.411 [2024-07-15 23:16:43.637943] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:28.411 [2024-07-15 23:16:43.637958] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:28.411 [2024-07-15 23:16:43.637970] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:28.411 [2024-07-15 23:16:43.637981] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:28.411 [2024-07-15 23:16:43.638061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.411 [2024-07-15 23:16:43.638128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:28.411 [2024-07-15 23:16:43.638194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:28.411 [2024-07-15 23:16:43.638197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.668 23:16:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:28.668 23:16:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:13:28.668 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:28.668 23:16:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:28.668 23:16:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:28.668 23:16:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:28.668 23:16:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:28.668 23:16:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.668 23:16:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:28.668 [2024-07-15 23:16:43.793704] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:28.668 23:16:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.668 23:16:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:28.668 23:16:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.668 23:16:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:28.668 Malloc0 00:13:28.668 23:16:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.668 23:16:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:28.668 23:16:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.668 23:16:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:28.668 23:16:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.668 23:16:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:28.668 23:16:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.668 23:16:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:28.668 23:16:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.668 23:16:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:28.668 23:16:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.668 23:16:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:28.668 [2024-07-15 23:16:43.847495] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:28.668 23:16:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.668 23:16:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:28.668 test case1: single bdev can't be used in multiple subsystems 00:13:28.668 23:16:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:28.668 23:16:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.668 23:16:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:28.668 23:16:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.668 23:16:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:28.668 23:16:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.668 23:16:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:28.668 23:16:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.668 23:16:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:13:28.668 23:16:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:28.668 23:16:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.668 23:16:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:28.668 [2024-07-15 23:16:43.871358] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:28.668 [2024-07-15 23:16:43.871388] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:28.668 [2024-07-15 23:16:43.871403] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.668 request: 00:13:28.668 { 00:13:28.668 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:28.668 "namespace": { 00:13:28.668 "bdev_name": "Malloc0", 00:13:28.668 "no_auto_visible": false 00:13:28.668 }, 00:13:28.668 "method": "nvmf_subsystem_add_ns", 00:13:28.668 "req_id": 1 00:13:28.668 } 00:13:28.668 Got JSON-RPC error response 00:13:28.668 response: 00:13:28.668 { 00:13:28.668 "code": -32602, 00:13:28.668 "message": "Invalid parameters" 00:13:28.668 } 00:13:28.668 23:16:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:13:28.668 23:16:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:13:28.668 23:16:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:28.668 23:16:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # 
echo ' Adding namespace failed - expected result.' 00:13:28.668 Adding namespace failed - expected result. 00:13:28.668 23:16:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:28.668 test case2: host connect to nvmf target in multiple paths 00:13:28.668 23:16:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:13:28.669 23:16:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.669 23:16:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:28.669 [2024-07-15 23:16:43.879460] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:13:28.669 23:16:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.669 23:16:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:29.234 23:16:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:13:29.858 23:16:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:29.858 23:16:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:13:29.858 23:16:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:29.858 23:16:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:29.858 23:16:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:13:32.381 23:16:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:32.381 23:16:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:32.381 23:16:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:32.381 23:16:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:32.381 23:16:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:32.381 23:16:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:13:32.381 23:16:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:32.381 [global] 00:13:32.381 thread=1 00:13:32.381 invalidate=1 00:13:32.381 rw=write 00:13:32.381 time_based=1 00:13:32.381 runtime=1 00:13:32.381 ioengine=libaio 00:13:32.381 direct=1 00:13:32.381 bs=4096 00:13:32.381 iodepth=1 00:13:32.381 norandommap=0 00:13:32.381 numjobs=1 00:13:32.381 00:13:32.381 verify_dump=1 00:13:32.381 verify_backlog=512 00:13:32.381 verify_state_save=0 00:13:32.381 do_verify=1 00:13:32.381 verify=crc32c-intel 00:13:32.381 [job0] 00:13:32.381 filename=/dev/nvme0n1 00:13:32.381 Could not set queue depth (nvme0n1) 00:13:32.381 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:32.381 fio-3.35 00:13:32.381 Starting 1 thread 00:13:33.313 00:13:33.313 job0: (groupid=0, jobs=1): err= 0: pid=2319792: Mon Jul 15 23:16:48 2024 00:13:33.313 read: IOPS=21, BW=85.5KiB/s 
(87.6kB/s)(88.0KiB/1029msec) 00:13:33.313 slat (nsec): min=9836, max=38061, avg=27694.82, stdev=8943.25 00:13:33.313 clat (usec): min=40796, max=41144, avg=40963.64, stdev=82.27 00:13:33.313 lat (usec): min=40830, max=41159, avg=40991.33, stdev=78.13 00:13:33.313 clat percentiles (usec): 00:13:33.313 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:13:33.313 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:33.313 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:33.313 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:33.313 | 99.99th=[41157] 00:13:33.313 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:13:33.313 slat (usec): min=6, max=29656, avg=70.30, stdev=1311.84 00:13:33.313 clat (usec): min=142, max=334, avg=173.90, stdev=19.18 00:13:33.313 lat (usec): min=148, max=29869, avg=244.20, stdev=1313.81 00:13:33.313 clat percentiles (usec): 00:13:33.313 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 155], 20.00th=[ 159], 00:13:33.313 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 176], 00:13:33.313 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 198], 95.00th=[ 206], 00:13:33.313 | 99.00th=[ 231], 99.50th=[ 237], 99.90th=[ 334], 99.95th=[ 334], 00:13:33.313 | 99.99th=[ 334] 00:13:33.313 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:13:33.313 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:33.313 lat (usec) : 250=95.51%, 500=0.37% 00:13:33.313 lat (msec) : 50=4.12% 00:13:33.313 cpu : usr=0.19%, sys=0.49%, ctx=538, majf=0, minf=2 00:13:33.313 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:33.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:33.313 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:33.313 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:33.313 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:33.313 00:13:33.313 Run status group 0 (all jobs): 00:13:33.313 READ: bw=85.5KiB/s (87.6kB/s), 85.5KiB/s-85.5KiB/s (87.6kB/s-87.6kB/s), io=88.0KiB (90.1kB), run=1029-1029msec 00:13:33.313 WRITE: bw=1990KiB/s (2038kB/s), 1990KiB/s-1990KiB/s (2038kB/s-2038kB/s), io=2048KiB (2097kB), run=1029-1029msec 00:13:33.313 00:13:33.313 Disk stats (read/write): 00:13:33.313 nvme0n1: ios=77/512, merge=0/0, ticks=940/87, in_queue=1027, util=98.70% 00:13:33.313 23:16:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:33.572 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:33.572 23:16:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:33.572 23:16:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:13:33.572 23:16:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:33.572 23:16:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:33.572 23:16:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:33.572 23:16:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:33.572 23:16:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:13:33.572 23:16:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:33.572 23:16:48 nvmf_tcp.nvmf_nmic -- 
target/nmic.sh@53 -- # nvmftestfini 00:13:33.572 23:16:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:33.572 23:16:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:13:33.572 23:16:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:33.572 23:16:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:13:33.572 23:16:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:33.572 23:16:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:33.572 rmmod nvme_tcp 00:13:33.572 rmmod nvme_fabrics 00:13:33.572 rmmod nvme_keyring 00:13:33.572 23:16:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:33.572 23:16:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:13:33.572 23:16:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:13:33.572 23:16:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2319264 ']' 00:13:33.572 23:16:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2319264 00:13:33.572 23:16:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 2319264 ']' 00:13:33.572 23:16:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 2319264 00:13:33.572 23:16:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:13:33.572 23:16:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:33.572 23:16:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2319264 00:13:33.572 23:16:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:33.572 23:16:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:33.572 23:16:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2319264' 00:13:33.572 killing process with pid 2319264 00:13:33.572 23:16:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 2319264 00:13:33.572 23:16:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 2319264 00:13:33.832 23:16:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:33.832 23:16:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:33.832 23:16:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:33.832 23:16:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:33.832 23:16:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:33.832 23:16:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:33.832 23:16:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:33.832 23:16:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.381 23:16:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:36.381 00:13:36.381 real 0m9.867s 00:13:36.381 user 0m22.422s 00:13:36.381 sys 0m2.213s 00:13:36.381 23:16:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:36.381 23:16:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:36.381 ************************************ 00:13:36.381 END TEST nvmf_nmic 00:13:36.381 ************************************ 00:13:36.381 23:16:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:36.381 23:16:51 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:36.381 23:16:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:36.381 23:16:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:36.381 23:16:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:36.381 ************************************ 00:13:36.381 START TEST nvmf_fio_target 00:13:36.381 ************************************ 00:13:36.381 23:16:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:36.381 * Looking for test storage... 00:13:36.381 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:36.381 23:16:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:36.381 23:16:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:13:36.381 23:16:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:36.381 23:16:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:36.381 23:16:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:36.381 23:16:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:36.381 23:16:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:36.381 23:16:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:36.381 23:16:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:36.381 23:16:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:36.381 23:16:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:36.381 23:16:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:36.381 23:16:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:36.381 23:16:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:36.381 23:16:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:36.381 23:16:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:36.381 23:16:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:36.381 23:16:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:36.381 23:16:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:36.381 23:16:51 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:36.381 23:16:51 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:36.381 23:16:51 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:36.381 23:16:51 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.381 23:16:51 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.381 23:16:51 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.381 23:16:51 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:13:36.381 23:16:51 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.381 23:16:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:13:36.381 23:16:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:36.381 23:16:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:36.381 23:16:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:36.381 23:16:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:36.381 23:16:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:36.381 23:16:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:36.381 23:16:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:36.381 23:16:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:36.381 23:16:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:36.381 23:16:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:36.381 23:16:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:36.381 23:16:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:13:36.381 23:16:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:36.381 23:16:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:36.381 23:16:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:36.381 23:16:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:36.381 23:16:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:36.382 23:16:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.382 23:16:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:36.382 23:16:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.382 23:16:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:36.382 23:16:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:36.382 23:16:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:13:36.382 23:16:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.280 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:38.280 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:13:38.280 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:38.280 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:38.280 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:38.280 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:38.280 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:38.280 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:13:38.280 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:38.280 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:13:38.280 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:13:38.280 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:13:38.280 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:13:38.280 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:13:38.280 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:13:38.280 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:38.280 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:38.280 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:38.280 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:38.280 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:38.280 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:38.280 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:38.280 23:16:53 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:38.280 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:38.280 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:38.280 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:38.280 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:38.280 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:38.280 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:38.280 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:38.280 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:38.280 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:38.280 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:38.280 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:38.280 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:38.280 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:38.280 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:38.280 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:38.280 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:38.280 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:38.280 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:38.280 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:38.280 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:38.281 23:16:53 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:38.281 Found net devices under 0000:84:00.0: cvl_0_0 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:38.281 Found net devices under 0000:84:00.1: cvl_0_1 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:38.281 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:38.281 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:13:38.281 00:13:38.281 --- 10.0.0.2 ping statistics --- 00:13:38.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:38.281 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:38.281 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:38.281 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:13:38.281 00:13:38.281 --- 10.0.0.1 ping statistics --- 00:13:38.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:38.281 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2322000 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2322000 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 2322000 ']' 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:38.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
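The network preparation traced above pairs the two E810 ports back-to-back over TCP: the first port (cvl_0_0) is moved into a private namespace and becomes the target-side interface at 10.0.0.2, while the second (cvl_0_1) stays in the root namespace as the initiator interface at 10.0.0.1, an iptables rule admits NVMe/TCP traffic on port 4420, and a ping in each direction confirms the path. A minimal standalone sketch of that wiring, assuming the same interface names and addresses the harness detected on this host (they will differ elsewhere, and root privileges are required), would be:

    # sketch of the nvmf_tcp_init steps traced above; cvl_0_0/cvl_0_1 are the ports found on this machine
    ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the first port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP toward the listener
    ping -c 1 10.0.0.2                                                  # initiator -> target reachability check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator reachability check

With that in place, the harness launches nvmf_tgt inside the namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt ...) so its TCP listener binds on the target-side port, which is exactly what the nvmfappstart/waitforlisten steps recorded next are doing.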
00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:38.281 23:16:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.281 [2024-07-15 23:16:53.368584] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:13:38.281 [2024-07-15 23:16:53.368680] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:38.281 EAL: No free 2048 kB hugepages reported on node 1 00:13:38.281 [2024-07-15 23:16:53.439061] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:38.281 [2024-07-15 23:16:53.559977] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:38.281 [2024-07-15 23:16:53.560043] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:38.281 [2024-07-15 23:16:53.560065] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:38.281 [2024-07-15 23:16:53.560079] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:38.281 [2024-07-15 23:16:53.560091] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:38.281 [2024-07-15 23:16:53.560200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:38.281 [2024-07-15 23:16:53.560259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:38.281 [2024-07-15 23:16:53.560315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:38.281 [2024-07-15 23:16:53.560319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.213 23:16:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:39.213 23:16:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:13:39.213 23:16:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:39.213 23:16:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:39.213 23:16:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.213 23:16:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:39.213 23:16:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:39.470 [2024-07-15 23:16:54.589792] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:39.470 23:16:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:39.728 23:16:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:39.728 23:16:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:39.985 23:16:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:39.985 23:16:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:40.242 23:16:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
00:13:40.242 23:16:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:40.499 23:16:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:40.499 23:16:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:40.755 23:16:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:41.012 23:16:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:41.012 23:16:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:41.269 23:16:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:41.269 23:16:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:41.526 23:16:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:41.526 23:16:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:13:41.782 23:16:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:42.039 23:16:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:42.039 23:16:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:42.296 23:16:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:42.296 23:16:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:42.553 23:16:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:42.809 [2024-07-15 23:16:57.923877] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:42.809 23:16:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:43.067 23:16:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:43.324 23:16:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:43.890 23:16:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:13:43.890 23:16:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:13:43.890 23:16:59 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:43.890 23:16:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:13:43.890 23:16:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:13:43.890 23:16:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:13:46.417 23:17:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:46.417 23:17:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:46.417 23:17:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:46.417 23:17:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:13:46.417 23:17:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:46.417 23:17:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:13:46.417 23:17:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:46.417 [global] 00:13:46.417 thread=1 00:13:46.417 invalidate=1 00:13:46.417 rw=write 00:13:46.417 time_based=1 00:13:46.417 runtime=1 00:13:46.417 ioengine=libaio 00:13:46.417 direct=1 00:13:46.417 bs=4096 00:13:46.417 iodepth=1 00:13:46.417 norandommap=0 00:13:46.417 numjobs=1 00:13:46.417 00:13:46.417 verify_dump=1 00:13:46.417 verify_backlog=512 00:13:46.417 verify_state_save=0 00:13:46.417 do_verify=1 00:13:46.417 verify=crc32c-intel 00:13:46.417 [job0] 00:13:46.417 filename=/dev/nvme0n1 00:13:46.417 [job1] 00:13:46.417 filename=/dev/nvme0n2 00:13:46.417 [job2] 00:13:46.417 filename=/dev/nvme0n3 00:13:46.417 [job3] 00:13:46.417 filename=/dev/nvme0n4 00:13:46.417 Could not set queue depth (nvme0n1) 00:13:46.417 Could not set queue depth (nvme0n2) 00:13:46.417 Could not set queue depth (nvme0n3) 00:13:46.417 Could not set queue depth (nvme0n4) 00:13:46.417 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:46.417 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:46.417 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:46.417 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:46.417 fio-3.35 00:13:46.417 Starting 4 threads 00:13:47.351 00:13:47.351 job0: (groupid=0, jobs=1): err= 0: pid=2323078: Mon Jul 15 23:17:02 2024 00:13:47.351 read: IOPS=21, BW=86.8KiB/s (88.9kB/s)(88.0KiB/1014msec) 00:13:47.351 slat (nsec): min=8767, max=17650, avg=14849.09, stdev=2392.65 00:13:47.351 clat (usec): min=302, max=41078, avg=39125.82, stdev=8671.51 00:13:47.351 lat (usec): min=317, max=41095, avg=39140.67, stdev=8671.50 00:13:47.351 clat percentiles (usec): 00:13:47.351 | 1.00th=[ 302], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:13:47.351 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:47.351 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:47.351 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:47.351 | 99.99th=[41157] 00:13:47.351 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:13:47.351 slat (nsec): min=8523, max=73039, avg=18681.37, stdev=8921.91 
00:13:47.351 clat (usec): min=166, max=517, avg=273.84, stdev=63.20 00:13:47.351 lat (usec): min=176, max=545, avg=292.52, stdev=68.91 00:13:47.351 clat percentiles (usec): 00:13:47.351 | 1.00th=[ 178], 5.00th=[ 190], 10.00th=[ 202], 20.00th=[ 219], 00:13:47.351 | 30.00th=[ 231], 40.00th=[ 247], 50.00th=[ 269], 60.00th=[ 285], 00:13:47.351 | 70.00th=[ 297], 80.00th=[ 322], 90.00th=[ 359], 95.00th=[ 392], 00:13:47.351 | 99.00th=[ 461], 99.50th=[ 490], 99.90th=[ 519], 99.95th=[ 519], 00:13:47.351 | 99.99th=[ 519] 00:13:47.351 bw ( KiB/s): min= 4096, max= 4096, per=29.34%, avg=4096.00, stdev= 0.00, samples=1 00:13:47.351 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:47.351 lat (usec) : 250=39.89%, 500=55.81%, 750=0.37% 00:13:47.351 lat (msec) : 50=3.93% 00:13:47.351 cpu : usr=0.79%, sys=0.99%, ctx=535, majf=0, minf=2 00:13:47.351 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:47.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:47.351 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:47.351 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:47.351 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:47.351 job1: (groupid=0, jobs=1): err= 0: pid=2323079: Mon Jul 15 23:17:02 2024 00:13:47.351 read: IOPS=1597, BW=6390KiB/s (6543kB/s)(6396KiB/1001msec) 00:13:47.351 slat (nsec): min=5510, max=33032, avg=6701.66, stdev=2347.14 00:13:47.351 clat (usec): min=243, max=603, avg=322.93, stdev=70.65 00:13:47.351 lat (usec): min=249, max=615, avg=329.63, stdev=70.81 00:13:47.351 clat percentiles (usec): 00:13:47.351 | 1.00th=[ 253], 5.00th=[ 262], 10.00th=[ 269], 20.00th=[ 277], 00:13:47.351 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 293], 60.00th=[ 302], 00:13:47.351 | 70.00th=[ 314], 80.00th=[ 388], 90.00th=[ 457], 95.00th=[ 469], 00:13:47.351 | 99.00th=[ 502], 99.50th=[ 578], 99.90th=[ 603], 99.95th=[ 603], 00:13:47.351 | 99.99th=[ 603] 00:13:47.351 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:13:47.351 slat (usec): min=6, max=139, avg= 9.93, stdev= 6.84 00:13:47.351 clat (usec): min=146, max=3647, avg=216.82, stdev=96.91 00:13:47.351 lat (usec): min=153, max=3658, avg=226.75, stdev=99.12 00:13:47.351 clat percentiles (usec): 00:13:47.351 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 167], 00:13:47.351 | 30.00th=[ 176], 40.00th=[ 184], 50.00th=[ 200], 60.00th=[ 217], 00:13:47.351 | 70.00th=[ 231], 80.00th=[ 249], 90.00th=[ 289], 95.00th=[ 338], 00:13:47.351 | 99.00th=[ 433], 99.50th=[ 465], 99.90th=[ 490], 99.95th=[ 519], 00:13:47.351 | 99.99th=[ 3654] 00:13:47.351 bw ( KiB/s): min= 8192, max= 8192, per=58.69%, avg=8192.00, stdev= 0.00, samples=1 00:13:47.351 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:47.351 lat (usec) : 250=45.65%, 500=53.80%, 750=0.52% 00:13:47.351 lat (msec) : 4=0.03% 00:13:47.351 cpu : usr=1.50%, sys=4.90%, ctx=3648, majf=0, minf=1 00:13:47.351 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:47.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:47.351 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:47.351 issued rwts: total=1599,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:47.351 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:47.351 job2: (groupid=0, jobs=1): err= 0: pid=2323080: Mon Jul 15 23:17:02 2024 00:13:47.351 read: IOPS=22, BW=89.6KiB/s 
(91.7kB/s)(92.0KiB/1027msec) 00:13:47.351 slat (nsec): min=8697, max=19165, avg=15494.00, stdev=2539.63 00:13:47.351 clat (usec): min=264, max=41449, avg=37436.85, stdev=11723.85 00:13:47.351 lat (usec): min=276, max=41459, avg=37452.34, stdev=11723.93 00:13:47.352 clat percentiles (usec): 00:13:47.352 | 1.00th=[ 265], 5.00th=[ 310], 10.00th=[40109], 20.00th=[41157], 00:13:47.352 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:47.352 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:47.352 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:13:47.352 | 99.99th=[41681] 00:13:47.352 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:13:47.352 slat (nsec): min=9549, max=93565, avg=21863.84, stdev=13243.26 00:13:47.352 clat (usec): min=167, max=651, avg=295.05, stdev=88.99 00:13:47.352 lat (usec): min=179, max=668, avg=316.91, stdev=98.83 00:13:47.352 clat percentiles (usec): 00:13:47.352 | 1.00th=[ 174], 5.00th=[ 184], 10.00th=[ 192], 20.00th=[ 212], 00:13:47.352 | 30.00th=[ 233], 40.00th=[ 260], 50.00th=[ 277], 60.00th=[ 302], 00:13:47.352 | 70.00th=[ 334], 80.00th=[ 379], 90.00th=[ 424], 95.00th=[ 453], 00:13:47.352 | 99.00th=[ 529], 99.50th=[ 578], 99.90th=[ 652], 99.95th=[ 652], 00:13:47.352 | 99.99th=[ 652] 00:13:47.352 bw ( KiB/s): min= 4096, max= 4096, per=29.34%, avg=4096.00, stdev= 0.00, samples=1 00:13:47.352 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:47.352 lat (usec) : 250=36.07%, 500=58.32%, 750=1.68% 00:13:47.352 lat (msec) : 50=3.93% 00:13:47.352 cpu : usr=0.39%, sys=1.66%, ctx=537, majf=0, minf=1 00:13:47.352 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:47.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:47.352 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:47.352 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:47.352 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:47.352 job3: (groupid=0, jobs=1): err= 0: pid=2323081: Mon Jul 15 23:17:02 2024 00:13:47.352 read: IOPS=20, BW=82.7KiB/s (84.7kB/s)(84.0KiB/1016msec) 00:13:47.352 slat (nsec): min=8835, max=18678, avg=16043.62, stdev=2327.30 00:13:47.352 clat (usec): min=40625, max=41184, avg=40965.70, stdev=113.24 00:13:47.352 lat (usec): min=40633, max=41198, avg=40981.74, stdev=114.43 00:13:47.352 clat percentiles (usec): 00:13:47.352 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:13:47.352 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:47.352 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:47.352 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:47.352 | 99.99th=[41157] 00:13:47.352 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:13:47.352 slat (nsec): min=8199, max=78426, avg=20745.99, stdev=10988.56 00:13:47.352 clat (usec): min=180, max=712, avg=276.68, stdev=56.09 00:13:47.352 lat (usec): min=192, max=739, avg=297.43, stdev=62.33 00:13:47.352 clat percentiles (usec): 00:13:47.352 | 1.00th=[ 204], 5.00th=[ 210], 10.00th=[ 217], 20.00th=[ 229], 00:13:47.352 | 30.00th=[ 241], 40.00th=[ 255], 50.00th=[ 269], 60.00th=[ 281], 00:13:47.352 | 70.00th=[ 293], 80.00th=[ 310], 90.00th=[ 347], 95.00th=[ 383], 00:13:47.352 | 99.00th=[ 449], 99.50th=[ 465], 99.90th=[ 709], 99.95th=[ 709], 00:13:47.352 | 99.99th=[ 709] 00:13:47.352 bw ( KiB/s): 
min= 4096, max= 4096, per=29.34%, avg=4096.00, stdev= 0.00, samples=1 00:13:47.352 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:47.352 lat (usec) : 250=35.27%, 500=60.60%, 750=0.19% 00:13:47.352 lat (msec) : 50=3.94% 00:13:47.352 cpu : usr=0.49%, sys=1.18%, ctx=535, majf=0, minf=1 00:13:47.352 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:47.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:47.352 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:47.352 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:47.352 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:47.352 00:13:47.352 Run status group 0 (all jobs): 00:13:47.352 READ: bw=6485KiB/s (6641kB/s), 82.7KiB/s-6390KiB/s (84.7kB/s-6543kB/s), io=6660KiB (6820kB), run=1001-1027msec 00:13:47.352 WRITE: bw=13.6MiB/s (14.3MB/s), 1994KiB/s-8184KiB/s (2042kB/s-8380kB/s), io=14.0MiB (14.7MB), run=1001-1027msec 00:13:47.352 00:13:47.352 Disk stats (read/write): 00:13:47.352 nvme0n1: ios=67/512, merge=0/0, ticks=686/127, in_queue=813, util=86.07% 00:13:47.352 nvme0n2: ios=1550/1536, merge=0/0, ticks=460/309, in_queue=769, util=86.34% 00:13:47.352 nvme0n3: ios=74/512, merge=0/0, ticks=1122/143, in_queue=1265, util=97.37% 00:13:47.352 nvme0n4: ios=41/512, merge=0/0, ticks=1603/141, in_queue=1744, util=97.35% 00:13:47.352 23:17:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:47.352 [global] 00:13:47.352 thread=1 00:13:47.352 invalidate=1 00:13:47.352 rw=randwrite 00:13:47.352 time_based=1 00:13:47.352 runtime=1 00:13:47.352 ioengine=libaio 00:13:47.352 direct=1 00:13:47.352 bs=4096 00:13:47.352 iodepth=1 00:13:47.352 norandommap=0 00:13:47.352 numjobs=1 00:13:47.352 00:13:47.352 verify_dump=1 00:13:47.352 verify_backlog=512 00:13:47.352 verify_state_save=0 00:13:47.352 do_verify=1 00:13:47.352 verify=crc32c-intel 00:13:47.352 [job0] 00:13:47.352 filename=/dev/nvme0n1 00:13:47.352 [job1] 00:13:47.352 filename=/dev/nvme0n2 00:13:47.352 [job2] 00:13:47.352 filename=/dev/nvme0n3 00:13:47.352 [job3] 00:13:47.352 filename=/dev/nvme0n4 00:13:47.610 Could not set queue depth (nvme0n1) 00:13:47.610 Could not set queue depth (nvme0n2) 00:13:47.610 Could not set queue depth (nvme0n3) 00:13:47.610 Could not set queue depth (nvme0n4) 00:13:47.610 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:47.610 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:47.610 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:47.610 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:47.610 fio-3.35 00:13:47.610 Starting 4 threads 00:13:48.979 00:13:48.979 job0: (groupid=0, jobs=1): err= 0: pid=2323307: Mon Jul 15 23:17:04 2024 00:13:48.979 read: IOPS=20, BW=83.2KiB/s (85.2kB/s)(84.0KiB/1010msec) 00:13:48.979 slat (nsec): min=9848, max=38384, avg=17444.57, stdev=9583.74 00:13:48.979 clat (usec): min=40892, max=42048, avg=41026.68, stdev=239.74 00:13:48.979 lat (usec): min=40915, max=42062, avg=41044.13, stdev=238.39 00:13:48.979 clat percentiles (usec): 00:13:48.979 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:13:48.979 | 30.00th=[41157], 
40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:48.979 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:48.979 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:48.979 | 99.99th=[42206] 00:13:48.979 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:13:48.979 slat (usec): min=9, max=105, avg=19.02, stdev=14.62 00:13:48.979 clat (usec): min=188, max=481, avg=264.62, stdev=46.75 00:13:48.979 lat (usec): min=200, max=513, avg=283.64, stdev=53.67 00:13:48.979 clat percentiles (usec): 00:13:48.979 | 1.00th=[ 192], 5.00th=[ 200], 10.00th=[ 208], 20.00th=[ 223], 00:13:48.979 | 30.00th=[ 235], 40.00th=[ 245], 50.00th=[ 262], 60.00th=[ 273], 00:13:48.979 | 70.00th=[ 289], 80.00th=[ 302], 90.00th=[ 326], 95.00th=[ 351], 00:13:48.979 | 99.00th=[ 396], 99.50th=[ 416], 99.90th=[ 482], 99.95th=[ 482], 00:13:48.979 | 99.99th=[ 482] 00:13:48.979 bw ( KiB/s): min= 4096, max= 4096, per=28.86%, avg=4096.00, stdev= 0.00, samples=1 00:13:48.979 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:48.979 lat (usec) : 250=41.09%, 500=54.97% 00:13:48.979 lat (msec) : 50=3.94% 00:13:48.979 cpu : usr=0.59%, sys=1.09%, ctx=534, majf=0, minf=1 00:13:48.979 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:48.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.980 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.980 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:48.980 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:48.980 job1: (groupid=0, jobs=1): err= 0: pid=2323308: Mon Jul 15 23:17:04 2024 00:13:48.980 read: IOPS=1626, BW=6505KiB/s (6662kB/s)(6512KiB/1001msec) 00:13:48.980 slat (nsec): min=5531, max=27383, avg=6707.68, stdev=1916.20 00:13:48.980 clat (usec): min=237, max=533, avg=311.89, stdev=67.24 00:13:48.980 lat (usec): min=243, max=539, avg=318.60, stdev=67.23 00:13:48.980 clat percentiles (usec): 00:13:48.980 | 1.00th=[ 245], 5.00th=[ 251], 10.00th=[ 258], 20.00th=[ 265], 00:13:48.980 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 293], 00:13:48.980 | 70.00th=[ 306], 80.00th=[ 388], 90.00th=[ 437], 95.00th=[ 457], 00:13:48.980 | 99.00th=[ 486], 99.50th=[ 494], 99.90th=[ 506], 99.95th=[ 537], 00:13:48.980 | 99.99th=[ 537] 00:13:48.980 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:13:48.980 slat (usec): min=6, max=125, avg= 9.89, stdev= 6.35 00:13:48.980 clat (usec): min=145, max=520, avg=220.69, stdev=73.06 00:13:48.980 lat (usec): min=152, max=548, avg=230.58, stdev=75.40 00:13:48.980 clat percentiles (usec): 00:13:48.980 | 1.00th=[ 153], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 163], 00:13:48.980 | 30.00th=[ 174], 40.00th=[ 182], 50.00th=[ 192], 60.00th=[ 212], 00:13:48.980 | 70.00th=[ 235], 80.00th=[ 260], 90.00th=[ 334], 95.00th=[ 400], 00:13:48.980 | 99.00th=[ 441], 99.50th=[ 461], 99.90th=[ 502], 99.95th=[ 519], 00:13:48.980 | 99.99th=[ 523] 00:13:48.980 bw ( KiB/s): min= 8192, max= 8192, per=57.71%, avg=8192.00, stdev= 0.00, samples=1 00:13:48.980 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:48.980 lat (usec) : 250=44.94%, 500=54.92%, 750=0.14% 00:13:48.980 cpu : usr=1.80%, sys=4.50%, ctx=3677, majf=0, minf=2 00:13:48.980 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:48.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.980 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.980 issued rwts: total=1628,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:48.980 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:48.980 job2: (groupid=0, jobs=1): err= 0: pid=2323309: Mon Jul 15 23:17:04 2024 00:13:48.980 read: IOPS=337, BW=1352KiB/s (1384kB/s)(1364KiB/1009msec) 00:13:48.980 slat (nsec): min=7957, max=19816, avg=9430.75, stdev=1963.00 00:13:48.980 clat (usec): min=262, max=42031, avg=2527.30, stdev=9124.56 00:13:48.980 lat (usec): min=270, max=42046, avg=2536.73, stdev=9125.72 00:13:48.980 clat percentiles (usec): 00:13:48.980 | 1.00th=[ 277], 5.00th=[ 289], 10.00th=[ 302], 20.00th=[ 310], 00:13:48.980 | 30.00th=[ 322], 40.00th=[ 338], 50.00th=[ 408], 60.00th=[ 424], 00:13:48.980 | 70.00th=[ 437], 80.00th=[ 445], 90.00th=[ 457], 95.00th=[40633], 00:13:48.980 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:48.980 | 99.99th=[42206] 00:13:48.980 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:13:48.980 slat (nsec): min=7558, max=64377, avg=17139.37, stdev=8361.12 00:13:48.980 clat (usec): min=179, max=1267, avg=256.47, stdev=64.34 00:13:48.980 lat (usec): min=187, max=1277, avg=273.60, stdev=66.16 00:13:48.980 clat percentiles (usec): 00:13:48.980 | 1.00th=[ 190], 5.00th=[ 202], 10.00th=[ 212], 20.00th=[ 225], 00:13:48.980 | 30.00th=[ 233], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 255], 00:13:48.980 | 70.00th=[ 269], 80.00th=[ 285], 90.00th=[ 306], 95.00th=[ 330], 00:13:48.980 | 99.00th=[ 383], 99.50th=[ 668], 99.90th=[ 1270], 99.95th=[ 1270], 00:13:48.980 | 99.99th=[ 1270] 00:13:48.980 bw ( KiB/s): min= 4096, max= 4096, per=28.86%, avg=4096.00, stdev= 0.00, samples=1 00:13:48.980 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:48.980 lat (usec) : 250=34.23%, 500=63.31%, 750=0.23% 00:13:48.980 lat (msec) : 2=0.12%, 50=2.11% 00:13:48.980 cpu : usr=1.09%, sys=0.69%, ctx=854, majf=0, minf=1 00:13:48.980 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:48.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.980 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.980 issued rwts: total=341,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:48.980 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:48.980 job3: (groupid=0, jobs=1): err= 0: pid=2323310: Mon Jul 15 23:17:04 2024 00:13:48.980 read: IOPS=20, BW=83.8KiB/s (85.8kB/s)(84.0KiB/1002msec) 00:13:48.980 slat (nsec): min=10841, max=17699, avg=14714.76, stdev=1379.81 00:13:48.980 clat (usec): min=40819, max=41933, avg=41023.64, stdev=217.20 00:13:48.980 lat (usec): min=40830, max=41948, avg=41038.35, stdev=217.38 00:13:48.980 clat percentiles (usec): 00:13:48.980 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:13:48.980 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:48.980 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:48.980 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:13:48.980 | 99.99th=[41681] 00:13:48.980 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:13:48.980 slat (nsec): min=7020, max=68753, avg=16155.97, stdev=8282.88 00:13:48.980 clat (usec): min=174, max=488, avg=252.81, stdev=53.65 00:13:48.980 lat (usec): min=184, max=510, avg=268.97, stdev=56.73 00:13:48.980 clat percentiles (usec): 00:13:48.980 | 1.00th=[ 
184], 5.00th=[ 192], 10.00th=[ 200], 20.00th=[ 217], 00:13:48.980 | 30.00th=[ 225], 40.00th=[ 235], 50.00th=[ 243], 60.00th=[ 251], 00:13:48.980 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 306], 95.00th=[ 388], 00:13:48.980 | 99.00th=[ 457], 99.50th=[ 465], 99.90th=[ 490], 99.95th=[ 490], 00:13:48.980 | 99.99th=[ 490] 00:13:48.980 bw ( KiB/s): min= 4096, max= 4096, per=28.86%, avg=4096.00, stdev= 0.00, samples=1 00:13:48.980 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:48.980 lat (usec) : 250=56.47%, 500=39.59% 00:13:48.980 lat (msec) : 50=3.94% 00:13:48.980 cpu : usr=0.50%, sys=0.80%, ctx=533, majf=0, minf=1 00:13:48.980 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:48.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.980 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.980 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:48.980 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:48.980 00:13:48.980 Run status group 0 (all jobs): 00:13:48.980 READ: bw=7964KiB/s (8156kB/s), 83.2KiB/s-6505KiB/s (85.2kB/s-6662kB/s), io=8044KiB (8237kB), run=1001-1010msec 00:13:48.980 WRITE: bw=13.9MiB/s (14.5MB/s), 2028KiB/s-8184KiB/s (2076kB/s-8380kB/s), io=14.0MiB (14.7MB), run=1001-1010msec 00:13:48.980 00:13:48.980 Disk stats (read/write): 00:13:48.980 nvme0n1: ios=41/512, merge=0/0, ticks=1687/128, in_queue=1815, util=99.90% 00:13:48.980 nvme0n2: ios=1551/1699, merge=0/0, ticks=801/322, in_queue=1123, util=91.25% 00:13:48.980 nvme0n3: ios=384/512, merge=0/0, ticks=849/127, in_queue=976, util=97.28% 00:13:48.980 nvme0n4: ios=17/512, merge=0/0, ticks=698/129, in_queue=827, util=89.67% 00:13:48.980 23:17:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:48.980 [global] 00:13:48.980 thread=1 00:13:48.980 invalidate=1 00:13:48.980 rw=write 00:13:48.980 time_based=1 00:13:48.980 runtime=1 00:13:48.980 ioengine=libaio 00:13:48.980 direct=1 00:13:48.980 bs=4096 00:13:48.980 iodepth=128 00:13:48.980 norandommap=0 00:13:48.980 numjobs=1 00:13:48.980 00:13:48.980 verify_dump=1 00:13:48.980 verify_backlog=512 00:13:48.980 verify_state_save=0 00:13:48.980 do_verify=1 00:13:48.980 verify=crc32c-intel 00:13:48.980 [job0] 00:13:48.980 filename=/dev/nvme0n1 00:13:48.980 [job1] 00:13:48.980 filename=/dev/nvme0n2 00:13:48.980 [job2] 00:13:48.980 filename=/dev/nvme0n3 00:13:48.980 [job3] 00:13:48.980 filename=/dev/nvme0n4 00:13:48.980 Could not set queue depth (nvme0n1) 00:13:48.980 Could not set queue depth (nvme0n2) 00:13:48.980 Could not set queue depth (nvme0n3) 00:13:48.980 Could not set queue depth (nvme0n4) 00:13:49.238 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:49.238 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:49.238 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:49.238 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:49.238 fio-3.35 00:13:49.238 Starting 4 threads 00:13:50.611 00:13:50.611 job0: (groupid=0, jobs=1): err= 0: pid=2323536: Mon Jul 15 23:17:05 2024 00:13:50.611 read: IOPS=5234, BW=20.4MiB/s (21.4MB/s)(20.5MiB/1002msec) 00:13:50.611 slat (usec): min=3, max=5448, 
avg=91.03, stdev=496.07 00:13:50.611 clat (usec): min=920, max=18763, avg=11703.66, stdev=1697.08 00:13:50.611 lat (usec): min=5205, max=18769, avg=11794.69, stdev=1734.31 00:13:50.611 clat percentiles (usec): 00:13:50.611 | 1.00th=[ 5932], 5.00th=[ 8979], 10.00th=[10159], 20.00th=[10945], 00:13:50.611 | 30.00th=[11338], 40.00th=[11338], 50.00th=[11600], 60.00th=[11731], 00:13:50.611 | 70.00th=[11863], 80.00th=[12518], 90.00th=[13435], 95.00th=[14615], 00:13:50.611 | 99.00th=[17695], 99.50th=[18482], 99.90th=[18744], 99.95th=[18744], 00:13:50.611 | 99.99th=[18744] 00:13:50.611 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:13:50.611 slat (usec): min=5, max=5452, avg=86.04, stdev=485.60 00:13:50.611 clat (usec): min=5470, max=19789, avg=11617.11, stdev=1650.38 00:13:50.611 lat (usec): min=5478, max=19801, avg=11703.15, stdev=1667.01 00:13:50.611 clat percentiles (usec): 00:13:50.611 | 1.00th=[ 7046], 5.00th=[ 9372], 10.00th=[10159], 20.00th=[10814], 00:13:50.611 | 30.00th=[11207], 40.00th=[11338], 50.00th=[11469], 60.00th=[11731], 00:13:50.611 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12518], 95.00th=[15139], 00:13:50.611 | 99.00th=[17433], 99.50th=[17695], 99.90th=[19792], 99.95th=[19792], 00:13:50.611 | 99.99th=[19792] 00:13:50.611 bw ( KiB/s): min=22528, max=22549, per=28.87%, avg=22538.50, stdev=14.85, samples=2 00:13:50.611 iops : min= 5632, max= 5637, avg=5634.50, stdev= 3.54, samples=2 00:13:50.611 lat (usec) : 1000=0.01% 00:13:50.611 lat (msec) : 10=8.23%, 20=91.76% 00:13:50.611 cpu : usr=6.29%, sys=6.69%, ctx=504, majf=0, minf=11 00:13:50.611 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:13:50.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.611 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:50.611 issued rwts: total=5245,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.611 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:50.611 job1: (groupid=0, jobs=1): err= 0: pid=2323543: Mon Jul 15 23:17:05 2024 00:13:50.611 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:13:50.611 slat (usec): min=3, max=5458, avg=94.93, stdev=513.12 00:13:50.611 clat (usec): min=7077, max=17643, avg=11917.28, stdev=1616.61 00:13:50.612 lat (usec): min=7310, max=17659, avg=12012.21, stdev=1658.60 00:13:50.612 clat percentiles (usec): 00:13:50.612 | 1.00th=[ 8029], 5.00th=[ 8979], 10.00th=[ 9634], 20.00th=[11207], 00:13:50.612 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11731], 60.00th=[11863], 00:13:50.612 | 70.00th=[12125], 80.00th=[13042], 90.00th=[14091], 95.00th=[15008], 00:13:50.612 | 99.00th=[16450], 99.50th=[16712], 99.90th=[17171], 99.95th=[17171], 00:13:50.612 | 99.99th=[17695] 00:13:50.612 write: IOPS=5443, BW=21.3MiB/s (22.3MB/s)(21.3MiB/1002msec); 0 zone resets 00:13:50.612 slat (usec): min=5, max=5919, avg=87.42, stdev=371.51 00:13:50.612 clat (usec): min=551, max=18724, avg=12055.77, stdev=1558.17 00:13:50.612 lat (usec): min=6006, max=19269, avg=12143.20, stdev=1566.48 00:13:50.612 clat percentiles (usec): 00:13:50.612 | 1.00th=[ 6718], 5.00th=[ 9110], 10.00th=[10683], 20.00th=[11469], 00:13:50.612 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12256], 60.00th=[12387], 00:13:50.612 | 70.00th=[12518], 80.00th=[12649], 90.00th=[12911], 95.00th=[15008], 00:13:50.612 | 99.00th=[16712], 99.50th=[16909], 99.90th=[17433], 99.95th=[17957], 00:13:50.612 | 99.99th=[18744] 00:13:50.612 bw ( KiB/s): min=21048, max=21560, per=27.29%, avg=21304.00, 
stdev=362.04, samples=2 00:13:50.612 iops : min= 5262, max= 5390, avg=5326.00, stdev=90.51, samples=2 00:13:50.612 lat (usec) : 750=0.01% 00:13:50.612 lat (msec) : 10=9.09%, 20=90.90% 00:13:50.612 cpu : usr=5.89%, sys=7.19%, ctx=694, majf=0, minf=13 00:13:50.612 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:50.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.612 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:50.612 issued rwts: total=5120,5454,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.612 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:50.612 job2: (groupid=0, jobs=1): err= 0: pid=2323546: Mon Jul 15 23:17:05 2024 00:13:50.612 read: IOPS=4031, BW=15.7MiB/s (16.5MB/s)(16.6MiB/1055msec) 00:13:50.612 slat (usec): min=2, max=15963, avg=109.18, stdev=683.54 00:13:50.612 clat (usec): min=6151, max=66130, avg=15657.63, stdev=8964.85 00:13:50.612 lat (usec): min=6155, max=66147, avg=15766.81, stdev=8976.02 00:13:50.612 clat percentiles (usec): 00:13:50.612 | 1.00th=[ 8717], 5.00th=[10159], 10.00th=[11469], 20.00th=[12649], 00:13:50.612 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13435], 60.00th=[13829], 00:13:50.612 | 70.00th=[14353], 80.00th=[15926], 90.00th=[18482], 95.00th=[28443], 00:13:50.612 | 99.00th=[65799], 99.50th=[65799], 99.90th=[66323], 99.95th=[66323], 00:13:50.612 | 99.99th=[66323] 00:13:50.612 write: IOPS=4367, BW=17.1MiB/s (17.9MB/s)(18.0MiB/1055msec); 0 zone resets 00:13:50.612 slat (usec): min=3, max=47166, avg=104.42, stdev=865.55 00:13:50.612 clat (usec): min=7250, max=58579, avg=13938.51, stdev=4486.79 00:13:50.612 lat (usec): min=7566, max=58596, avg=14042.94, stdev=4539.27 00:13:50.612 clat percentiles (usec): 00:13:50.612 | 1.00th=[ 7635], 5.00th=[ 9372], 10.00th=[11469], 20.00th=[12911], 00:13:50.612 | 30.00th=[13304], 40.00th=[13566], 50.00th=[13698], 60.00th=[13829], 00:13:50.612 | 70.00th=[13960], 80.00th=[14222], 90.00th=[14484], 95.00th=[16909], 00:13:50.612 | 99.00th=[48497], 99.50th=[48497], 99.90th=[48497], 99.95th=[48497], 00:13:50.612 | 99.99th=[58459] 00:13:50.612 bw ( KiB/s): min=17242, max=19656, per=23.63%, avg=18449.00, stdev=1706.96, samples=2 00:13:50.612 iops : min= 4310, max= 4914, avg=4612.00, stdev=427.09, samples=2 00:13:50.612 lat (msec) : 10=4.89%, 20=90.19%, 50=3.49%, 100=1.43% 00:13:50.612 cpu : usr=3.61%, sys=5.98%, ctx=486, majf=0, minf=13 00:13:50.612 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:50.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.612 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:50.612 issued rwts: total=4253,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.612 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:50.612 job3: (groupid=0, jobs=1): err= 0: pid=2323547: Mon Jul 15 23:17:05 2024 00:13:50.612 read: IOPS=4548, BW=17.8MiB/s (18.6MB/s)(18.0MiB/1013msec) 00:13:50.612 slat (usec): min=3, max=12191, avg=117.81, stdev=825.84 00:13:50.612 clat (usec): min=4588, max=25178, avg=14424.48, stdev=3464.61 00:13:50.612 lat (usec): min=4594, max=25183, avg=14542.29, stdev=3515.46 00:13:50.612 clat percentiles (usec): 00:13:50.612 | 1.00th=[ 5604], 5.00th=[10290], 10.00th=[12256], 20.00th=[12649], 00:13:50.612 | 30.00th=[12911], 40.00th=[13042], 50.00th=[13173], 60.00th=[13435], 00:13:50.612 | 70.00th=[14484], 80.00th=[16712], 90.00th=[20055], 95.00th=[22152], 00:13:50.612 | 99.00th=[24249], 
99.50th=[24773], 99.90th=[25297], 99.95th=[25297], 00:13:50.612 | 99.99th=[25297] 00:13:50.612 write: IOPS=4833, BW=18.9MiB/s (19.8MB/s)(19.1MiB/1013msec); 0 zone resets 00:13:50.612 slat (usec): min=4, max=15236, avg=88.00, stdev=501.14 00:13:50.612 clat (usec): min=1268, max=28246, avg=12659.49, stdev=3247.17 00:13:50.612 lat (usec): min=1278, max=28253, avg=12747.49, stdev=3273.90 00:13:50.612 clat percentiles (usec): 00:13:50.612 | 1.00th=[ 3720], 5.00th=[ 6652], 10.00th=[ 8455], 20.00th=[11469], 00:13:50.612 | 30.00th=[12125], 40.00th=[12911], 50.00th=[13566], 60.00th=[13698], 00:13:50.612 | 70.00th=[13829], 80.00th=[13960], 90.00th=[14222], 95.00th=[14877], 00:13:50.612 | 99.00th=[27657], 99.50th=[27919], 99.90th=[28181], 99.95th=[28181], 00:13:50.612 | 99.99th=[28181] 00:13:50.612 bw ( KiB/s): min=17680, max=20464, per=24.43%, avg=19072.00, stdev=1968.59, samples=2 00:13:50.612 iops : min= 4420, max= 5116, avg=4768.00, stdev=492.15, samples=2 00:13:50.612 lat (msec) : 2=0.02%, 4=0.63%, 10=9.53%, 20=83.45%, 50=6.37% 00:13:50.612 cpu : usr=4.84%, sys=6.13%, ctx=560, majf=0, minf=13 00:13:50.612 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:13:50.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.612 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:50.612 issued rwts: total=4608,4896,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.612 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:50.612 00:13:50.612 Run status group 0 (all jobs): 00:13:50.612 READ: bw=71.2MiB/s (74.6MB/s), 15.7MiB/s-20.4MiB/s (16.5MB/s-21.4MB/s), io=75.1MiB (78.7MB), run=1002-1055msec 00:13:50.612 WRITE: bw=76.2MiB/s (79.9MB/s), 17.1MiB/s-22.0MiB/s (17.9MB/s-23.0MB/s), io=80.4MiB (84.3MB), run=1002-1055msec 00:13:50.612 00:13:50.612 Disk stats (read/write): 00:13:50.612 nvme0n1: ios=4629/4703, merge=0/0, ticks=27180/24641, in_queue=51821, util=96.69% 00:13:50.612 nvme0n2: ios=4373/4608, merge=0/0, ticks=26916/25850, in_queue=52766, util=90.85% 00:13:50.612 nvme0n3: ios=3633/3943, merge=0/0, ticks=26576/27703, in_queue=54279, util=94.36% 00:13:50.612 nvme0n4: ios=3850/4096, merge=0/0, ticks=54781/50097, in_queue=104878, util=98.10% 00:13:50.612 23:17:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:50.612 [global] 00:13:50.612 thread=1 00:13:50.612 invalidate=1 00:13:50.612 rw=randwrite 00:13:50.612 time_based=1 00:13:50.612 runtime=1 00:13:50.612 ioengine=libaio 00:13:50.612 direct=1 00:13:50.612 bs=4096 00:13:50.612 iodepth=128 00:13:50.612 norandommap=0 00:13:50.612 numjobs=1 00:13:50.612 00:13:50.612 verify_dump=1 00:13:50.612 verify_backlog=512 00:13:50.612 verify_state_save=0 00:13:50.612 do_verify=1 00:13:50.612 verify=crc32c-intel 00:13:50.612 [job0] 00:13:50.612 filename=/dev/nvme0n1 00:13:50.612 [job1] 00:13:50.612 filename=/dev/nvme0n2 00:13:50.612 [job2] 00:13:50.612 filename=/dev/nvme0n3 00:13:50.612 [job3] 00:13:50.612 filename=/dev/nvme0n4 00:13:50.612 Could not set queue depth (nvme0n1) 00:13:50.612 Could not set queue depth (nvme0n2) 00:13:50.612 Could not set queue depth (nvme0n3) 00:13:50.612 Could not set queue depth (nvme0n4) 00:13:50.612 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:50.612 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
00:13:50.612 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:50.612 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:50.612 fio-3.35 00:13:50.612 Starting 4 threads 00:13:51.984 00:13:51.984 job0: (groupid=0, jobs=1): err= 0: pid=2323892: Mon Jul 15 23:17:07 2024 00:13:51.984 read: IOPS=2923, BW=11.4MiB/s (12.0MB/s)(11.5MiB/1005msec) 00:13:51.984 slat (usec): min=3, max=14241, avg=128.99, stdev=849.97 00:13:51.984 clat (usec): min=4008, max=47712, avg=15758.75, stdev=6226.83 00:13:51.984 lat (usec): min=6877, max=55260, avg=15887.74, stdev=6301.91 00:13:51.984 clat percentiles (usec): 00:13:51.984 | 1.00th=[ 9241], 5.00th=[10159], 10.00th=[11338], 20.00th=[11731], 00:13:51.984 | 30.00th=[13042], 40.00th=[13435], 50.00th=[13698], 60.00th=[14484], 00:13:51.984 | 70.00th=[15533], 80.00th=[17171], 90.00th=[22676], 95.00th=[29492], 00:13:51.984 | 99.00th=[40109], 99.50th=[45351], 99.90th=[47449], 99.95th=[47449], 00:13:51.984 | 99.99th=[47973] 00:13:51.984 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:13:51.984 slat (usec): min=3, max=29056, avg=191.74, stdev=1161.36 00:13:51.984 clat (usec): min=6125, max=70803, avg=26112.43, stdev=16175.51 00:13:51.984 lat (usec): min=6247, max=70830, avg=26304.17, stdev=16285.25 00:13:51.984 clat percentiles (usec): 00:13:51.984 | 1.00th=[ 7767], 5.00th=[11207], 10.00th=[11469], 20.00th=[12649], 00:13:51.984 | 30.00th=[14615], 40.00th=[17695], 50.00th=[20055], 60.00th=[23987], 00:13:51.984 | 70.00th=[27132], 80.00th=[39060], 90.00th=[53740], 95.00th=[62129], 00:13:51.984 | 99.00th=[65799], 99.50th=[66323], 99.90th=[70779], 99.95th=[70779], 00:13:51.984 | 99.99th=[70779] 00:13:51.984 bw ( KiB/s): min= 8200, max=16376, per=19.99%, avg=12288.00, stdev=5781.31, samples=2 00:13:51.984 iops : min= 2050, max= 4094, avg=3072.00, stdev=1445.33, samples=2 00:13:51.984 lat (msec) : 10=3.71%, 20=61.86%, 50=27.47%, 100=6.96% 00:13:51.984 cpu : usr=3.29%, sys=4.88%, ctx=302, majf=0, minf=13 00:13:51.984 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:13:51.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:51.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:51.984 issued rwts: total=2938,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:51.984 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:51.984 job1: (groupid=0, jobs=1): err= 0: pid=2323893: Mon Jul 15 23:17:07 2024 00:13:51.984 read: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec) 00:13:51.984 slat (usec): min=3, max=8604, avg=93.17, stdev=539.65 00:13:51.984 clat (usec): min=5101, max=35520, avg=12306.37, stdev=4119.65 00:13:51.984 lat (usec): min=5107, max=35562, avg=12399.54, stdev=4160.12 00:13:51.984 clat percentiles (usec): 00:13:51.984 | 1.00th=[ 6718], 5.00th=[ 8455], 10.00th=[ 9372], 20.00th=[10028], 00:13:51.984 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11076], 60.00th=[11731], 00:13:51.984 | 70.00th=[12256], 80.00th=[13173], 90.00th=[16581], 95.00th=[24511], 00:13:51.984 | 99.00th=[26870], 99.50th=[27657], 99.90th=[29492], 99.95th=[30278], 00:13:51.984 | 99.99th=[35390] 00:13:51.984 write: IOPS=5384, BW=21.0MiB/s (22.1MB/s)(21.1MiB/1005msec); 0 zone resets 00:13:51.984 slat (usec): min=3, max=10980, avg=88.29, stdev=518.86 00:13:51.984 clat (usec): min=584, max=30292, avg=11810.63, stdev=3804.25 00:13:51.984 lat (usec): min=4360, 
max=30305, avg=11898.93, stdev=3834.37 00:13:51.984 clat percentiles (usec): 00:13:51.984 | 1.00th=[ 5407], 5.00th=[ 8029], 10.00th=[ 9110], 20.00th=[ 9634], 00:13:51.984 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10683], 60.00th=[10945], 00:13:51.984 | 70.00th=[11600], 80.00th=[12649], 90.00th=[17695], 95.00th=[20841], 00:13:51.984 | 99.00th=[26084], 99.50th=[28181], 99.90th=[29492], 99.95th=[30278], 00:13:51.984 | 99.99th=[30278] 00:13:51.984 bw ( KiB/s): min=17760, max=24504, per=34.37%, avg=21132.00, stdev=4768.73, samples=2 00:13:51.984 iops : min= 4440, max= 6126, avg=5283.00, stdev=1192.18, samples=2 00:13:51.984 lat (usec) : 750=0.01% 00:13:51.984 lat (msec) : 10=23.24%, 20=70.08%, 50=6.68% 00:13:51.984 cpu : usr=5.48%, sys=9.16%, ctx=430, majf=0, minf=13 00:13:51.984 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:51.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:51.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:51.984 issued rwts: total=5120,5411,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:51.984 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:51.984 job2: (groupid=0, jobs=1): err= 0: pid=2323894: Mon Jul 15 23:17:07 2024 00:13:51.984 read: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.4MiB/1042msec) 00:13:51.984 slat (usec): min=3, max=9462, avg=114.57, stdev=617.72 00:13:51.984 clat (usec): min=9074, max=54064, avg=15340.40, stdev=5877.34 00:13:51.984 lat (usec): min=9081, max=54071, avg=15454.97, stdev=5900.35 00:13:51.984 clat percentiles (usec): 00:13:51.984 | 1.00th=[10290], 5.00th=[10814], 10.00th=[11863], 20.00th=[12518], 00:13:51.984 | 30.00th=[13042], 40.00th=[13435], 50.00th=[13960], 60.00th=[14222], 00:13:51.984 | 70.00th=[15139], 80.00th=[15795], 90.00th=[18744], 95.00th=[24511], 00:13:51.984 | 99.00th=[48497], 99.50th=[49021], 99.90th=[50594], 99.95th=[54264], 00:13:51.984 | 99.99th=[54264] 00:13:51.984 write: IOPS=3930, BW=15.4MiB/s (16.1MB/s)(16.0MiB/1042msec); 0 zone resets 00:13:51.984 slat (usec): min=4, max=19879, avg=132.60, stdev=768.97 00:13:51.984 clat (usec): min=6983, max=72606, avg=18279.61, stdev=11078.69 00:13:51.984 lat (usec): min=7009, max=72646, avg=18412.21, stdev=11151.42 00:13:51.984 clat percentiles (usec): 00:13:51.984 | 1.00th=[ 8848], 5.00th=[10945], 10.00th=[12518], 20.00th=[12911], 00:13:51.984 | 30.00th=[13173], 40.00th=[13566], 50.00th=[14091], 60.00th=[14877], 00:13:51.984 | 70.00th=[16712], 80.00th=[19530], 90.00th=[30802], 95.00th=[39584], 00:13:51.984 | 99.00th=[65799], 99.50th=[68682], 99.90th=[72877], 99.95th=[72877], 00:13:51.984 | 99.99th=[72877] 00:13:51.984 bw ( KiB/s): min=16072, max=16384, per=26.39%, avg=16228.00, stdev=220.62, samples=2 00:13:51.984 iops : min= 4018, max= 4096, avg=4057.00, stdev=55.15, samples=2 00:13:51.984 lat (msec) : 10=2.14%, 20=84.18%, 50=11.46%, 100=2.22% 00:13:51.984 cpu : usr=4.13%, sys=6.15%, ctx=382, majf=0, minf=13 00:13:51.984 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:51.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:51.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:51.985 issued rwts: total=3698,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:51.985 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:51.985 job3: (groupid=0, jobs=1): err= 0: pid=2323895: Mon Jul 15 23:17:07 2024 00:13:51.985 read: IOPS=3035, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1012msec) 00:13:51.985 
slat (usec): min=3, max=14671, avg=136.47, stdev=975.43 00:13:51.985 clat (usec): min=4611, max=40074, avg=16036.67, stdev=4991.45 00:13:51.985 lat (usec): min=4618, max=40079, avg=16173.13, stdev=5068.54 00:13:51.985 clat percentiles (usec): 00:13:51.985 | 1.00th=[ 7963], 5.00th=[11600], 10.00th=[11863], 20.00th=[12518], 00:13:51.985 | 30.00th=[13304], 40.00th=[13698], 50.00th=[13829], 60.00th=[15533], 00:13:51.985 | 70.00th=[17433], 80.00th=[19792], 90.00th=[22676], 95.00th=[25560], 00:13:51.985 | 99.00th=[34341], 99.50th=[36439], 99.90th=[40109], 99.95th=[40109], 00:13:51.985 | 99.99th=[40109] 00:13:51.985 write: IOPS=3397, BW=13.3MiB/s (13.9MB/s)(13.4MiB/1012msec); 0 zone resets 00:13:51.985 slat (usec): min=4, max=14786, avg=157.69, stdev=764.41 00:13:51.985 clat (usec): min=3215, max=74606, avg=22994.50, stdev=14920.31 00:13:51.985 lat (usec): min=3221, max=74615, avg=23152.18, stdev=15028.43 00:13:51.985 clat percentiles (usec): 00:13:51.985 | 1.00th=[ 4817], 5.00th=[ 7767], 10.00th=[ 9634], 20.00th=[12649], 00:13:51.985 | 30.00th=[13960], 40.00th=[15270], 50.00th=[18482], 60.00th=[21627], 00:13:51.985 | 70.00th=[24249], 80.00th=[31065], 90.00th=[43779], 95.00th=[59507], 00:13:51.985 | 99.00th=[71828], 99.50th=[72877], 99.90th=[74974], 99.95th=[74974], 00:13:51.985 | 99.99th=[74974] 00:13:51.985 bw ( KiB/s): min=10104, max=16384, per=21.54%, avg=13244.00, stdev=4440.63, samples=2 00:13:51.985 iops : min= 2526, max= 4096, avg=3311.00, stdev=1110.16, samples=2 00:13:51.985 lat (msec) : 4=0.28%, 10=7.86%, 20=59.06%, 50=28.80%, 100=3.99% 00:13:51.985 cpu : usr=3.07%, sys=3.86%, ctx=393, majf=0, minf=11 00:13:51.985 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:13:51.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:51.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:51.985 issued rwts: total=3072,3438,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:51.985 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:51.985 00:13:51.985 Run status group 0 (all jobs): 00:13:51.985 READ: bw=55.6MiB/s (58.3MB/s), 11.4MiB/s-19.9MiB/s (12.0MB/s-20.9MB/s), io=57.9MiB (60.7MB), run=1005-1042msec 00:13:51.985 WRITE: bw=60.0MiB/s (63.0MB/s), 11.9MiB/s-21.0MiB/s (12.5MB/s-22.1MB/s), io=62.6MiB (65.6MB), run=1005-1042msec 00:13:51.985 00:13:51.985 Disk stats (read/write): 00:13:51.985 nvme0n1: ios=2610/2775, merge=0/0, ticks=20440/33530, in_queue=53970, util=86.67% 00:13:51.985 nvme0n2: ios=4147/4471, merge=0/0, ticks=26068/24281, in_queue=50349, util=98.78% 00:13:51.985 nvme0n3: ios=3608/3623, merge=0/0, ticks=20915/20809, in_queue=41724, util=97.70% 00:13:51.985 nvme0n4: ios=2573/2959, merge=0/0, ticks=40650/62049, in_queue=102699, util=90.51% 00:13:51.985 23:17:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:13:51.985 23:17:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2324031 00:13:51.985 23:17:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:13:51.985 23:17:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:13:51.985 [global] 00:13:51.985 thread=1 00:13:51.985 invalidate=1 00:13:51.985 rw=read 00:13:51.985 time_based=1 00:13:51.985 runtime=10 00:13:51.985 ioengine=libaio 00:13:51.985 direct=1 00:13:51.985 bs=4096 00:13:51.985 iodepth=1 00:13:51.985 norandommap=1 00:13:51.985 numjobs=1 00:13:51.985 00:13:51.985 [job0] 00:13:51.985 filename=/dev/nvme0n1 
00:13:51.985 [job1] 00:13:51.985 filename=/dev/nvme0n2 00:13:51.985 [job2] 00:13:51.985 filename=/dev/nvme0n3 00:13:51.985 [job3] 00:13:51.985 filename=/dev/nvme0n4 00:13:51.985 Could not set queue depth (nvme0n1) 00:13:51.985 Could not set queue depth (nvme0n2) 00:13:51.985 Could not set queue depth (nvme0n3) 00:13:51.985 Could not set queue depth (nvme0n4) 00:13:51.985 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:51.985 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:51.985 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:51.985 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:51.985 fio-3.35 00:13:51.985 Starting 4 threads 00:13:55.320 23:17:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:13:55.320 23:17:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:13:55.320 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=3522560, buflen=4096 00:13:55.320 fio: pid=2324122, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:55.320 23:17:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:55.320 23:17:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:55.603 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=31670272, buflen=4096 00:13:55.603 fio: pid=2324121, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:55.603 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=24178688, buflen=4096 00:13:55.603 fio: pid=2324119, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:55.603 23:17:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:55.603 23:17:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:13:55.861 23:17:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:55.861 23:17:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:13:55.861 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=16752640, buflen=4096 00:13:55.861 fio: pid=2324120, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:56.120 00:13:56.120 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2324119: Mon Jul 15 23:17:11 2024 00:13:56.120 read: IOPS=1714, BW=6856KiB/s (7021kB/s)(23.1MiB/3444msec) 00:13:56.120 slat (usec): min=5, max=11652, avg=15.91, stdev=229.24 00:13:56.120 clat (usec): min=212, max=41264, avg=563.21, stdev=2967.87 00:13:56.120 lat (usec): min=218, max=41275, avg=579.12, stdev=2977.29 00:13:56.120 clat percentiles (usec): 00:13:56.120 | 1.00th=[ 237], 5.00th=[ 245], 10.00th=[ 251], 20.00th=[ 262], 00:13:56.120 | 30.00th=[ 269], 40.00th=[ 281], 50.00th=[ 293], 
60.00th=[ 318], 00:13:56.120 | 70.00th=[ 367], 80.00th=[ 449], 90.00th=[ 498], 95.00th=[ 537], 00:13:56.120 | 99.00th=[ 742], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:13:56.120 | 99.99th=[41157] 00:13:56.120 bw ( KiB/s): min= 96, max=13288, per=30.76%, avg=6141.33, stdev=4945.48, samples=6 00:13:56.120 iops : min= 24, max= 3322, avg=1535.33, stdev=1236.37, samples=6 00:13:56.120 lat (usec) : 250=9.49%, 500=81.01%, 750=8.49%, 1000=0.27% 00:13:56.120 lat (msec) : 2=0.14%, 4=0.03%, 20=0.02%, 50=0.54% 00:13:56.120 cpu : usr=0.99%, sys=2.85%, ctx=5911, majf=0, minf=1 00:13:56.120 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:56.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.120 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.120 issued rwts: total=5904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:56.120 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:56.120 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2324120: Mon Jul 15 23:17:11 2024 00:13:56.120 read: IOPS=1098, BW=4393KiB/s (4499kB/s)(16.0MiB/3724msec) 00:13:56.120 slat (usec): min=4, max=13808, avg=17.92, stdev=294.69 00:13:56.120 clat (usec): min=218, max=42424, avg=882.86, stdev=4587.73 00:13:56.120 lat (usec): min=223, max=54931, avg=900.77, stdev=4654.80 00:13:56.120 clat percentiles (usec): 00:13:56.120 | 1.00th=[ 243], 5.00th=[ 255], 10.00th=[ 262], 20.00th=[ 269], 00:13:56.120 | 30.00th=[ 277], 40.00th=[ 289], 50.00th=[ 302], 60.00th=[ 338], 00:13:56.120 | 70.00th=[ 433], 80.00th=[ 474], 90.00th=[ 510], 95.00th=[ 570], 00:13:56.120 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:13:56.120 | 99.99th=[42206] 00:13:56.120 bw ( KiB/s): min= 96, max=12448, per=23.37%, avg=4665.29, stdev=4986.93, samples=7 00:13:56.120 iops : min= 24, max= 3112, avg=1166.29, stdev=1246.77, samples=7 00:13:56.120 lat (usec) : 250=2.54%, 500=84.99%, 750=10.66%, 1000=0.37% 00:13:56.120 lat (msec) : 2=0.10%, 4=0.02%, 50=1.30% 00:13:56.120 cpu : usr=0.91%, sys=1.61%, ctx=4094, majf=0, minf=1 00:13:56.120 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:56.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.120 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.120 issued rwts: total=4091,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:56.120 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:56.120 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2324121: Mon Jul 15 23:17:11 2024 00:13:56.120 read: IOPS=2416, BW=9665KiB/s (9897kB/s)(30.2MiB/3200msec) 00:13:56.120 slat (nsec): min=5883, max=73110, avg=11944.17, stdev=6181.86 00:13:56.120 clat (usec): min=214, max=41416, avg=395.95, stdev=1733.70 00:13:56.120 lat (usec): min=221, max=41426, avg=407.89, stdev=1733.98 00:13:56.120 clat percentiles (usec): 00:13:56.120 | 1.00th=[ 229], 5.00th=[ 237], 10.00th=[ 243], 20.00th=[ 253], 00:13:56.120 | 30.00th=[ 269], 40.00th=[ 285], 50.00th=[ 302], 60.00th=[ 326], 00:13:56.120 | 70.00th=[ 359], 80.00th=[ 379], 90.00th=[ 412], 95.00th=[ 482], 00:13:56.120 | 99.00th=[ 562], 99.50th=[ 586], 99.90th=[41157], 99.95th=[41157], 00:13:56.120 | 99.99th=[41157] 00:13:56.120 bw ( KiB/s): min= 1872, max=13608, per=51.61%, avg=10302.67, stdev=4358.68, samples=6 00:13:56.120 iops : min= 468, max= 3402, avg=2575.67, 
stdev=1089.67, samples=6 00:13:56.120 lat (usec) : 250=16.77%, 500=79.35%, 750=3.61%, 1000=0.01% 00:13:56.120 lat (msec) : 2=0.04%, 4=0.01%, 10=0.01%, 50=0.18% 00:13:56.120 cpu : usr=1.72%, sys=4.31%, ctx=7733, majf=0, minf=1 00:13:56.120 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:56.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.120 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.120 issued rwts: total=7733,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:56.120 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:56.120 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2324122: Mon Jul 15 23:17:11 2024 00:13:56.120 read: IOPS=296, BW=1184KiB/s (1213kB/s)(3440KiB/2905msec) 00:13:56.120 slat (nsec): min=4694, max=52598, avg=10115.22, stdev=4561.39 00:13:56.120 clat (usec): min=226, max=42033, avg=3338.33, stdev=10717.73 00:13:56.120 lat (usec): min=233, max=42051, avg=3348.44, stdev=10720.68 00:13:56.120 clat percentiles (usec): 00:13:56.120 | 1.00th=[ 237], 5.00th=[ 262], 10.00th=[ 273], 20.00th=[ 285], 00:13:56.120 | 30.00th=[ 289], 40.00th=[ 297], 50.00th=[ 302], 60.00th=[ 306], 00:13:56.120 | 70.00th=[ 310], 80.00th=[ 318], 90.00th=[ 343], 95.00th=[41157], 00:13:56.120 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:56.120 | 99.99th=[42206] 00:13:56.120 bw ( KiB/s): min= 96, max= 120, per=0.50%, avg=100.80, stdev=10.73, samples=5 00:13:56.120 iops : min= 24, max= 30, avg=25.20, stdev= 2.68, samples=5 00:13:56.120 lat (usec) : 250=2.79%, 500=88.85%, 750=0.46%, 1000=0.35% 00:13:56.120 lat (msec) : 50=7.43% 00:13:56.120 cpu : usr=0.03%, sys=0.41%, ctx=862, majf=0, minf=1 00:13:56.120 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:56.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.120 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.120 issued rwts: total=861,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:56.120 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:56.120 00:13:56.120 Run status group 0 (all jobs): 00:13:56.120 READ: bw=19.5MiB/s (20.4MB/s), 1184KiB/s-9665KiB/s (1213kB/s-9897kB/s), io=72.6MiB (76.1MB), run=2905-3724msec 00:13:56.120 00:13:56.120 Disk stats (read/write): 00:13:56.120 nvme0n1: ios=5623/0, merge=0/0, ticks=3220/0, in_queue=3220, util=95.05% 00:13:56.120 nvme0n2: ios=4087/0, merge=0/0, ticks=3469/0, in_queue=3469, util=95.90% 00:13:56.120 nvme0n3: ios=7730/0, merge=0/0, ticks=2910/0, in_queue=2910, util=96.79% 00:13:56.120 nvme0n4: ios=769/0, merge=0/0, ticks=2842/0, in_queue=2842, util=96.75% 00:13:56.120 23:17:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:56.120 23:17:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:56.378 23:17:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:56.378 23:17:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:13:56.636 23:17:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:56.636 23:17:11 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:56.893 23:17:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:56.893 23:17:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:13:57.151 23:17:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:13:57.151 23:17:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 2324031 00:13:57.151 23:17:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:13:57.151 23:17:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:57.408 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:57.408 23:17:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:57.408 23:17:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:13:57.408 23:17:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:57.408 23:17:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:57.408 23:17:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:57.408 23:17:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:57.408 23:17:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:13:57.408 23:17:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:13:57.408 23:17:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:13:57.408 nvmf hotplug test: fio failed as expected 00:13:57.408 23:17:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:57.665 23:17:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:13:57.665 23:17:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:13:57.665 23:17:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:57.665 23:17:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:57.665 23:17:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:13:57.665 23:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:57.665 23:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:13:57.665 23:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:57.665 23:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:13:57.665 23:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:57.665 23:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:57.665 rmmod nvme_tcp 00:13:57.665 rmmod nvme_fabrics 00:13:57.665 rmmod nvme_keyring 00:13:57.665 23:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:57.665 23:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:13:57.665 23:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:13:57.665 23:17:12 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@489 -- # '[' -n 2322000 ']' 00:13:57.665 23:17:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2322000 00:13:57.665 23:17:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 2322000 ']' 00:13:57.665 23:17:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 2322000 00:13:57.665 23:17:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:13:57.665 23:17:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:57.665 23:17:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2322000 00:13:57.665 23:17:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:57.665 23:17:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:57.665 23:17:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2322000' 00:13:57.665 killing process with pid 2322000 00:13:57.665 23:17:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 2322000 00:13:57.665 23:17:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 2322000 00:13:57.923 23:17:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:57.923 23:17:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:57.923 23:17:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:57.923 23:17:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:57.923 23:17:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:57.923 23:17:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:57.923 23:17:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:57.923 23:17:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.446 23:17:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:00.446 00:14:00.446 real 0m24.059s 00:14:00.446 user 1m25.222s 00:14:00.446 sys 0m6.607s 00:14:00.446 23:17:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:00.446 23:17:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.446 ************************************ 00:14:00.446 END TEST nvmf_fio_target 00:14:00.446 ************************************ 00:14:00.446 23:17:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:00.446 23:17:15 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:00.446 23:17:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:00.446 23:17:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:00.446 23:17:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:00.446 ************************************ 00:14:00.446 START TEST nvmf_bdevio 00:14:00.446 ************************************ 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:00.446 * Looking for test storage... 
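Note on the nvmf_fio_target run that ended above: the hotplug check launches fio in the background against the four exported namespaces, then deletes the backing raid and malloc bdevs over RPC while reads are still in flight, so the Remote I/O errors and the non-zero fio status are the expected result. A condensed sketch of that sequence, reconstructed from the trace above rather than copied from fio.sh (full jenkins paths shortened):

    scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &   # 4 KiB reads on /dev/nvme0n1..n4
    fio_pid=$!
    sleep 3
    scripts/rpc.py bdev_raid_delete concat0
    scripts/rpc.py bdev_raid_delete raid0
    for malloc_bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        scripts/rpc.py bdev_malloc_delete "$malloc_bdev"       # namespaces disappear under fio
    done
    wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'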
00:14:00.446 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:14:00.446 23:17:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:02.347 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:02.347 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:02.347 Found net devices under 0000:84:00.0: cvl_0_0 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:02.347 
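Note: the two ice/E810 ports just discovered (cvl_0_0 and cvl_0_1) are split between a private network namespace and the default one in the trace that follows, so the target and the initiator can exercise real hardware on a single host. Condensed to the bare commands used below (a sketch of nvmf_tcp_init; cvl_0_0 hosts the target, cvl_0_1 the initiator):

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator address, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                         # confirm the initiator can reach the target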
Found net devices under 0000:84:00.1: cvl_0_1 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:02.347 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:02.348 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:02.348 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:02.348 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:02.348 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:02.348 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:02.348 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:14:02.348 00:14:02.348 --- 10.0.0.2 ping statistics --- 00:14:02.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.348 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:14:02.348 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:02.348 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:02.348 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:14:02.348 00:14:02.348 --- 10.0.0.1 ping statistics --- 00:14:02.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.348 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:14:02.348 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:02.348 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:14:02.348 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:02.348 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:02.348 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:02.348 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:02.348 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:02.348 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:02.348 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:02.348 23:17:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:02.348 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:02.348 23:17:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:02.348 23:17:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:02.348 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2326770 00:14:02.348 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2326770 00:14:02.348 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:14:02.348 23:17:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 2326770 ']' 00:14:02.348 23:17:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.348 23:17:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:02.348 23:17:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.348 23:17:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:02.348 23:17:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:02.348 [2024-07-15 23:17:17.562004] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:14:02.348 [2024-07-15 23:17:17.562104] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.348 EAL: No free 2048 kB hugepages reported on node 1 00:14:02.348 [2024-07-15 23:17:17.626905] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:02.606 [2024-07-15 23:17:17.741156] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:02.606 [2024-07-15 23:17:17.741218] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:02.606 [2024-07-15 23:17:17.741232] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:02.606 [2024-07-15 23:17:17.741244] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:02.606 [2024-07-15 23:17:17.741253] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:02.606 [2024-07-15 23:17:17.741337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:02.606 [2024-07-15 23:17:17.741400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:14:02.606 [2024-07-15 23:17:17.741468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:14:02.606 [2024-07-15 23:17:17.741471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:02.606 23:17:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:02.606 23:17:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:14:02.606 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:02.606 23:17:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:02.606 23:17:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:02.606 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:02.606 23:17:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:02.606 23:17:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.606 23:17:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:02.607 [2024-07-15 23:17:17.907660] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:02.607 23:17:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.607 23:17:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:02.607 23:17:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.607 23:17:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:02.865 Malloc0 00:14:02.865 23:17:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.865 23:17:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:02.865 23:17:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.865 23:17:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:02.865 23:17:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.865 23:17:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:02.865 23:17:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.865 23:17:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:02.865 23:17:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.865 23:17:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:02.865 23:17:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.865 23:17:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
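For reference, the target-side provisioning that bdevio.sh drives over the RPC socket, condensed from the rpc_cmd calls traced above (a sketch of the sequence, with the full rpc.py path elided):

    rpc.py nvmf_create_transport -t tcp -o -u 8192                      # TCP transport with the test's NVMF_TRANSPORT_OPTS
    rpc.py bdev_malloc_create 64 512 -b Malloc0                         # 64 MiB ramdisk, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0     # export the ramdisk as a namespace
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420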
00:14:02.865 [2024-07-15 23:17:17.960047] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:02.865 23:17:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.865 23:17:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:14:02.865 23:17:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:02.865 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:14:02.865 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:14:02.865 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:02.865 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:02.865 { 00:14:02.865 "params": { 00:14:02.865 "name": "Nvme$subsystem", 00:14:02.865 "trtype": "$TEST_TRANSPORT", 00:14:02.865 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:02.865 "adrfam": "ipv4", 00:14:02.865 "trsvcid": "$NVMF_PORT", 00:14:02.865 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:02.865 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:02.865 "hdgst": ${hdgst:-false}, 00:14:02.865 "ddgst": ${ddgst:-false} 00:14:02.865 }, 00:14:02.865 "method": "bdev_nvme_attach_controller" 00:14:02.865 } 00:14:02.865 EOF 00:14:02.865 )") 00:14:02.865 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:14:02.865 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:14:02.865 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:14:02.865 23:17:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:02.865 "params": { 00:14:02.865 "name": "Nvme1", 00:14:02.865 "trtype": "tcp", 00:14:02.865 "traddr": "10.0.0.2", 00:14:02.865 "adrfam": "ipv4", 00:14:02.865 "trsvcid": "4420", 00:14:02.865 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:02.865 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:02.865 "hdgst": false, 00:14:02.865 "ddgst": false 00:14:02.865 }, 00:14:02.865 "method": "bdev_nvme_attach_controller" 00:14:02.865 }' 00:14:02.865 [2024-07-15 23:17:18.007865] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
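The one-line JSON that gen_nvmf_target_json assembled above and handed to bdevio via --json /dev/fd/62 is easier to read expanded; this is the same attach-controller entry, only reformatted:

    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }

In other words, the bdevio app connects back to the listener created above at 10.0.0.2:4420 as nqn.2016-06.io.spdk:host1, with header and data digests disabled, and runs its block-device test suite against the resulting Nvme1n1 bdev.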
00:14:02.865 [2024-07-15 23:17:18.007943] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2326835 ] 00:14:02.865 EAL: No free 2048 kB hugepages reported on node 1 00:14:02.865 [2024-07-15 23:17:18.071433] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:03.122 [2024-07-15 23:17:18.188674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:03.122 [2024-07-15 23:17:18.188727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:03.122 [2024-07-15 23:17:18.188730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.379 I/O targets: 00:14:03.379 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:03.379 00:14:03.379 00:14:03.379 CUnit - A unit testing framework for C - Version 2.1-3 00:14:03.379 http://cunit.sourceforge.net/ 00:14:03.379 00:14:03.379 00:14:03.379 Suite: bdevio tests on: Nvme1n1 00:14:03.379 Test: blockdev write read block ...passed 00:14:03.379 Test: blockdev write zeroes read block ...passed 00:14:03.379 Test: blockdev write zeroes read no split ...passed 00:14:03.379 Test: blockdev write zeroes read split ...passed 00:14:03.379 Test: blockdev write zeroes read split partial ...passed 00:14:03.379 Test: blockdev reset ...[2024-07-15 23:17:18.690226] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:03.379 [2024-07-15 23:17:18.690348] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373dc0 (9): Bad file descriptor 00:14:03.636 [2024-07-15 23:17:18.702234] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:03.636 passed 00:14:03.636 Test: blockdev write read 8 blocks ...passed 00:14:03.636 Test: blockdev write read size > 128k ...passed 00:14:03.636 Test: blockdev write read invalid size ...passed 00:14:03.636 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:03.636 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:03.636 Test: blockdev write read max offset ...passed 00:14:03.636 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:03.636 Test: blockdev writev readv 8 blocks ...passed 00:14:03.636 Test: blockdev writev readv 30 x 1block ...passed 00:14:03.894 Test: blockdev writev readv block ...passed 00:14:03.894 Test: blockdev writev readv size > 128k ...passed 00:14:03.894 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:03.894 Test: blockdev comparev and writev ...[2024-07-15 23:17:19.000017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:03.894 [2024-07-15 23:17:19.000052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:03.894 [2024-07-15 23:17:19.000076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:03.894 [2024-07-15 23:17:19.000094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:03.894 [2024-07-15 23:17:19.000577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:03.894 [2024-07-15 23:17:19.000601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:03.894 [2024-07-15 23:17:19.000621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:03.894 [2024-07-15 23:17:19.000638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:03.894 [2024-07-15 23:17:19.001108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:03.894 [2024-07-15 23:17:19.001134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:03.894 [2024-07-15 23:17:19.001163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:03.894 [2024-07-15 23:17:19.001181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:03.894 [2024-07-15 23:17:19.001613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:03.894 [2024-07-15 23:17:19.001638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:03.894 [2024-07-15 23:17:19.001659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:03.894 [2024-07-15 23:17:19.001675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:03.894 passed 00:14:03.894 Test: blockdev nvme passthru rw ...passed 00:14:03.894 Test: blockdev nvme passthru vendor specific ...[2024-07-15 23:17:19.084123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:03.894 [2024-07-15 23:17:19.084154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:03.894 [2024-07-15 23:17:19.084379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:03.894 [2024-07-15 23:17:19.084402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:03.894 [2024-07-15 23:17:19.084578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:03.894 [2024-07-15 23:17:19.084599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:03.894 [2024-07-15 23:17:19.084772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:03.894 [2024-07-15 23:17:19.084795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:03.894 passed 00:14:03.894 Test: blockdev nvme admin passthru ...passed 00:14:03.894 Test: blockdev copy ...passed 00:14:03.894 00:14:03.894 Run Summary: Type Total Ran Passed Failed Inactive 00:14:03.895 suites 1 1 n/a 0 0 00:14:03.895 tests 23 23 23 0 0 00:14:03.895 asserts 152 152 152 0 n/a 00:14:03.895 00:14:03.895 Elapsed time = 1.218 seconds 00:14:04.152 23:17:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:04.152 23:17:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.152 23:17:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:04.152 23:17:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.152 23:17:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:04.152 23:17:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:14:04.152 23:17:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:04.152 23:17:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:14:04.152 23:17:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:04.152 23:17:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:14:04.152 23:17:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:04.152 23:17:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:04.152 rmmod nvme_tcp 00:14:04.152 rmmod nvme_fabrics 00:14:04.152 rmmod nvme_keyring 00:14:04.152 23:17:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:04.152 23:17:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:14:04.152 23:17:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:14:04.152 23:17:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2326770 ']' 00:14:04.152 23:17:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2326770 00:14:04.152 23:17:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
2326770 ']' 00:14:04.152 23:17:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 2326770 00:14:04.152 23:17:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:14:04.152 23:17:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:04.152 23:17:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2326770 00:14:04.152 23:17:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:14:04.152 23:17:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:14:04.152 23:17:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2326770' 00:14:04.152 killing process with pid 2326770 00:14:04.152 23:17:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 2326770 00:14:04.152 23:17:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 2326770 00:14:04.713 23:17:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:04.713 23:17:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:04.713 23:17:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:04.713 23:17:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:04.713 23:17:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:04.713 23:17:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.713 23:17:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:04.713 23:17:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.613 23:17:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:06.613 00:14:06.613 real 0m6.519s 00:14:06.613 user 0m11.276s 00:14:06.613 sys 0m2.109s 00:14:06.613 23:17:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:06.613 23:17:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:06.613 ************************************ 00:14:06.613 END TEST nvmf_bdevio 00:14:06.613 ************************************ 00:14:06.613 23:17:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:06.613 23:17:21 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:06.613 23:17:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:06.613 23:17:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:06.613 23:17:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:06.613 ************************************ 00:14:06.613 START TEST nvmf_auth_target 00:14:06.613 ************************************ 00:14:06.613 23:17:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:06.613 * Looking for test storage... 
00:14:06.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:06.613 23:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:06.613 23:17:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:06.613 23:17:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:06.613 23:17:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:06.613 23:17:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:06.613 23:17:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:06.613 23:17:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:06.613 23:17:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:06.613 23:17:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:06.613 23:17:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:06.613 23:17:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:06.613 23:17:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:06.871 23:17:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:06.871 23:17:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:06.871 23:17:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:06.871 23:17:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:06.871 23:17:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:06.871 23:17:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:06.871 23:17:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:06.871 23:17:21 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:06.871 23:17:21 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:06.871 23:17:21 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:06.871 23:17:21 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.871 23:17:21 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.871 23:17:21 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.871 23:17:21 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:06.871 23:17:21 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.871 23:17:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:14:06.871 23:17:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:06.871 23:17:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:06.871 23:17:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:06.871 23:17:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:06.871 23:17:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:06.871 23:17:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:06.871 23:17:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:06.871 23:17:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:06.871 23:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:06.871 23:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:06.871 23:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:06.871 23:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:06.871 23:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:06.871 23:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:06.871 23:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:06.871 23:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:14:06.871 23:17:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:06.871 23:17:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:06.871 23:17:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:06.871 23:17:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:06.871 23:17:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:06.871 23:17:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.871 23:17:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:06.871 23:17:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.871 23:17:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:06.871 23:17:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:06.871 23:17:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:14:06.871 23:17:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.797 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:08.797 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:14:08.797 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:08.797 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:08.797 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:08.797 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:08.797 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:08.797 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:14:08.797 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:08.797 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:14:08.797 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:14:08.797 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:14:08.797 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:14:08.797 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:14:08.797 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:14:08.797 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:08.797 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:08.797 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:08.797 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:08.797 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:08.797 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:08.797 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:08.797 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:08.797 23:17:23 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:08.797 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:08.797 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:08.797 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:08.797 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:08.797 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:08.797 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:08.797 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:08.797 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:08.797 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:08.797 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:08.797 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:08.797 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:08.797 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:08.797 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:08.797 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:08.798 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:08.798 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:08.798 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:08.798 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:08.798 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:08.798 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:08.798 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:08.798 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:08.798 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:08.798 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:08.798 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:08.798 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:08.798 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:08.798 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:08.798 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:08.798 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:08.798 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:08.798 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:08.798 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:08.798 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: 
cvl_0_0' 00:14:08.798 Found net devices under 0000:84:00.0: cvl_0_0 00:14:08.798 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:08.798 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:08.798 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:08.798 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:08.798 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:08.798 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:08.798 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:08.798 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:08.798 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:08.798 Found net devices under 0000:84:00.1: cvl_0_1 00:14:08.798 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:08.798 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:08.798 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:14:08.798 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:08.798 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:08.798 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:08.798 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:08.798 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:08.798 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:08.798 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:08.798 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:08.798 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:08.798 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:08.798 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:08.798 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:08.798 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:08.798 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:08.798 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:08.798 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:08.798 23:17:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:08.798 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:08.798 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:08.798 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:08.798 23:17:24 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:08.798 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:08.798 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:08.798 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:08.798 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:14:08.798 00:14:08.798 --- 10.0.0.2 ping statistics --- 00:14:08.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.798 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:14:08.798 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:08.798 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:08.798 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:14:08.798 00:14:08.798 --- 10.0.0.1 ping statistics --- 00:14:08.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.798 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:14:08.798 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:08.798 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:14:08.798 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:08.798 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:08.798 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:08.798 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:08.798 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:08.798 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:08.798 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:08.798 23:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:14:08.798 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:08.798 23:17:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:08.798 23:17:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.798 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2329000 00:14:08.798 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:08.798 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2329000 00:14:08.798 23:17:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2329000 ']' 00:14:08.798 23:17:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.798 23:17:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:08.798 23:17:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
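The namespace setup traced above (nvmf_tcp_init) boils down to a split topology: the target-side port is moved into a private namespace with 10.0.0.2, the initiator keeps 10.0.0.1 in the default namespace, TCP port 4420 is opened, and reachability is verified both ways before the target application starts. A condensed sketch, assuming the same cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing the harness uses (an illustration of the topology, not the exact common.sh code):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                             # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator
  modprobe nvme-tcp                                              # kernel initiator used by the later nvme connect calls

The nvmf_tgt application is then launched under ip netns exec cvl_0_0_ns_spdk, which is why it listens on 10.0.0.2:4420 from inside the namespace.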
00:14:08.798 23:17:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:08.798 23:17:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=2329023 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e0c583e223f442021b1e704f30995adf50b6af4cf76f8b8e 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.EJD 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e0c583e223f442021b1e704f30995adf50b6af4cf76f8b8e 0 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e0c583e223f442021b1e704f30995adf50b6af4cf76f8b8e 0 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e0c583e223f442021b1e704f30995adf50b6af4cf76f8b8e 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.EJD 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.EJD 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.EJD 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=cb7cab705a24627a104f9cc8808a761cf57ca661fc47fd111a1e9e0c55b9bf81 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.il1 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key cb7cab705a24627a104f9cc8808a761cf57ca661fc47fd111a1e9e0c55b9bf81 3 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 cb7cab705a24627a104f9cc8808a761cf57ca661fc47fd111a1e9e0c55b9bf81 3 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=cb7cab705a24627a104f9cc8808a761cf57ca661fc47fd111a1e9e0c55b9bf81 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.il1 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.il1 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.il1 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:09.365 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=74e49ed6b369527e82073f8afa3b5951 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Ngs 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 74e49ed6b369527e82073f8afa3b5951 1 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 74e49ed6b369527e82073f8afa3b5951 1 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=74e49ed6b369527e82073f8afa3b5951 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Ngs 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Ngs 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.Ngs 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=cebe6c23d5f8e79eb494dd3a72f6483825c2f3f8a5ecfe2f 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.n7Z 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key cebe6c23d5f8e79eb494dd3a72f6483825c2f3f8a5ecfe2f 2 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 cebe6c23d5f8e79eb494dd3a72f6483825c2f3f8a5ecfe2f 2 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=cebe6c23d5f8e79eb494dd3a72f6483825c2f3f8a5ecfe2f 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.n7Z 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.n7Z 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.n7Z 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b993c22af7fd6a0ffb39a5d69a3d5980a68e4e00546bfbd7 00:14:09.366 
23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.RIG 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b993c22af7fd6a0ffb39a5d69a3d5980a68e4e00546bfbd7 2 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b993c22af7fd6a0ffb39a5d69a3d5980a68e4e00546bfbd7 2 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b993c22af7fd6a0ffb39a5d69a3d5980a68e4e00546bfbd7 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:14:09.366 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:09.624 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.RIG 00:14:09.624 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.RIG 00:14:09.624 23:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.RIG 00:14:09.624 23:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:14:09.624 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:09.624 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:09.624 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:09.624 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:14:09.624 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:14:09.624 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:09.624 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=dc0940aee4fa038a0806fc926580162b 00:14:09.624 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:14:09.624 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.hgl 00:14:09.624 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key dc0940aee4fa038a0806fc926580162b 1 00:14:09.624 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 dc0940aee4fa038a0806fc926580162b 1 00:14:09.624 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:09.624 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:09.624 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=dc0940aee4fa038a0806fc926580162b 00:14:09.624 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:14:09.624 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:09.624 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.hgl 00:14:09.624 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.hgl 00:14:09.624 23:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.hgl 00:14:09.624 23:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:14:09.624 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:14:09.624 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:09.624 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:09.624 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:14:09.624 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:14:09.624 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:09.624 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e787d9b45cfa7f2fe310ea95ba70b9fd7847a39aef720a40ffaa75ced25c6451 00:14:09.624 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:14:09.624 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.7U6 00:14:09.624 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e787d9b45cfa7f2fe310ea95ba70b9fd7847a39aef720a40ffaa75ced25c6451 3 00:14:09.624 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e787d9b45cfa7f2fe310ea95ba70b9fd7847a39aef720a40ffaa75ced25c6451 3 00:14:09.624 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:09.625 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:09.625 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e787d9b45cfa7f2fe310ea95ba70b9fd7847a39aef720a40ffaa75ced25c6451 00:14:09.625 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:14:09.625 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:09.625 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.7U6 00:14:09.625 23:17:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.7U6 00:14:09.625 23:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.7U6 00:14:09.625 23:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:14:09.625 23:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 2329000 00:14:09.625 23:17:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2329000 ']' 00:14:09.625 23:17:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.625 23:17:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:09.625 23:17:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
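The gen_dhchap_key traces above build the DHHC-1 secrets that later appear in the nvme connect and bdev_nvme_attach_controller calls. A minimal sketch of that container format, assuming (as the traced xxd/python steps suggest) that the ASCII hex string itself is the secret material and a little-endian CRC32 of it is appended before base64 encoding; the two-digit field after DHHC-1 identifies the hash associated with the key (00 for the null entry here):

  hex_key=$(xxd -p -c0 -l 24 /dev/urandom)     # 48 hex chars, as for keys[0] above
  secret=$(python3 -c 'import sys, zlib, base64, struct; k = sys.argv[1].encode(); print(base64.b64encode(k + struct.pack("<I", zlib.crc32(k))).decode())' "$hex_key")
  key_file=$(mktemp -t spdk.key-null.XXX)      # temp file naming mirrors the trace; path is illustrative
  echo "DHHC-1:00:${secret}:" > "$key_file"
  chmod 0600 "$key_file"                       # DHCHAP secrets must not be world-readable

The resulting strings (e.g. the DHHC-1:00:... and DHHC-1:03:... secrets in the connect commands further down) are what both the target-side keyring RPCs and the kernel initiator consume.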
00:14:09.625 23:17:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:09.625 23:17:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.882 23:17:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:09.882 23:17:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:09.882 23:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 2329023 /var/tmp/host.sock 00:14:09.882 23:17:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2329023 ']' 00:14:09.882 23:17:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:14:09.882 23:17:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:09.882 23:17:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:09.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:09.882 23:17:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:09.882 23:17:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.140 23:17:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:10.140 23:17:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:10.140 23:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:14:10.140 23:17:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.140 23:17:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.140 23:17:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.140 23:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:10.140 23:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.EJD 00:14:10.140 23:17:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.140 23:17:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.140 23:17:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.140 23:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.EJD 00:14:10.140 23:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.EJD 00:14:10.398 23:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.il1 ]] 00:14:10.398 23:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.il1 00:14:10.398 23:17:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.398 23:17:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.398 23:17:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.398 23:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.il1 00:14:10.398 23:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.il1 00:14:10.656 23:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:10.656 23:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Ngs 00:14:10.656 23:17:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.656 23:17:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.656 23:17:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.656 23:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Ngs 00:14:10.656 23:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Ngs 00:14:10.914 23:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.n7Z ]] 00:14:10.914 23:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.n7Z 00:14:10.914 23:17:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.914 23:17:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.914 23:17:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.914 23:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.n7Z 00:14:10.914 23:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.n7Z 00:14:11.172 23:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:11.172 23:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.RIG 00:14:11.172 23:17:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.172 23:17:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.172 23:17:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.172 23:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.RIG 00:14:11.172 23:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.RIG 00:14:11.429 23:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.hgl ]] 00:14:11.429 23:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.hgl 00:14:11.429 23:17:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.429 23:17:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.429 23:17:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.429 23:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.hgl 00:14:11.429 23:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.hgl 00:14:11.686 23:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:11.686 23:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.7U6 00:14:11.686 23:17:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.686 23:17:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.686 23:17:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.686 23:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.7U6 00:14:11.686 23:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.7U6 00:14:11.943 23:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:14:11.943 23:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:11.943 23:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:11.943 23:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:11.943 23:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:11.943 23:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:12.200 23:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:14:12.200 23:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:12.200 23:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:12.200 23:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:12.200 23:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:12.200 23:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:12.200 23:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:12.200 23:17:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.200 23:17:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.200 23:17:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.200 23:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:12.200 23:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:12.457 00:14:12.457 23:17:27 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:12.457 23:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:12.457 23:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:12.715 23:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:12.715 23:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:12.715 23:17:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.715 23:17:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.715 23:17:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.715 23:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:12.715 { 00:14:12.715 "cntlid": 1, 00:14:12.715 "qid": 0, 00:14:12.715 "state": "enabled", 00:14:12.715 "thread": "nvmf_tgt_poll_group_000", 00:14:12.715 "listen_address": { 00:14:12.715 "trtype": "TCP", 00:14:12.715 "adrfam": "IPv4", 00:14:12.715 "traddr": "10.0.0.2", 00:14:12.715 "trsvcid": "4420" 00:14:12.715 }, 00:14:12.715 "peer_address": { 00:14:12.715 "trtype": "TCP", 00:14:12.715 "adrfam": "IPv4", 00:14:12.715 "traddr": "10.0.0.1", 00:14:12.715 "trsvcid": "43306" 00:14:12.715 }, 00:14:12.715 "auth": { 00:14:12.715 "state": "completed", 00:14:12.715 "digest": "sha256", 00:14:12.715 "dhgroup": "null" 00:14:12.715 } 00:14:12.715 } 00:14:12.715 ]' 00:14:12.715 23:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:12.715 23:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:12.715 23:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:12.971 23:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:12.971 23:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:12.971 23:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:12.971 23:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:12.971 23:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:13.229 23:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZTBjNTgzZTIyM2Y0NDIwMjFiMWU3MDRmMzA5OTVhZGY1MGI2YWY0Y2Y3NmY4YjhlDgIwtw==: --dhchap-ctrl-secret DHHC-1:03:Y2I3Y2FiNzA1YTI0NjI3YTEwNGY5Y2M4ODA4YTc2MWNmNTdjYTY2MWZjNDdmZDExMWExZTllMGM1NWI5YmY4McqrS28=: 00:14:14.159 23:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:14.159 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:14.159 23:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:14.159 23:17:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.159 23:17:29 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.159 23:17:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.159 23:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:14.159 23:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:14.159 23:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:14.416 23:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:14:14.416 23:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:14.416 23:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:14.416 23:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:14.416 23:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:14.416 23:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:14.416 23:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:14.416 23:17:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.416 23:17:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.416 23:17:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.416 23:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:14.416 23:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:14.672 00:14:14.672 23:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:14.672 23:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:14.672 23:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:14.929 23:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:14.929 23:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:14.929 23:17:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.929 23:17:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.929 23:17:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.929 23:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:14.929 { 00:14:14.929 "cntlid": 3, 00:14:14.929 "qid": 0, 00:14:14.929 
"state": "enabled", 00:14:14.929 "thread": "nvmf_tgt_poll_group_000", 00:14:14.929 "listen_address": { 00:14:14.929 "trtype": "TCP", 00:14:14.929 "adrfam": "IPv4", 00:14:14.929 "traddr": "10.0.0.2", 00:14:14.929 "trsvcid": "4420" 00:14:14.929 }, 00:14:14.929 "peer_address": { 00:14:14.929 "trtype": "TCP", 00:14:14.929 "adrfam": "IPv4", 00:14:14.929 "traddr": "10.0.0.1", 00:14:14.929 "trsvcid": "43342" 00:14:14.929 }, 00:14:14.929 "auth": { 00:14:14.929 "state": "completed", 00:14:14.929 "digest": "sha256", 00:14:14.929 "dhgroup": "null" 00:14:14.929 } 00:14:14.929 } 00:14:14.929 ]' 00:14:14.929 23:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:14.929 23:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:14.929 23:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:14.929 23:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:14.929 23:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:14.929 23:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:14.929 23:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:14.929 23:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:15.186 23:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:NzRlNDllZDZiMzY5NTI3ZTgyMDczZjhhZmEzYjU5NTGFKtjo: --dhchap-ctrl-secret DHHC-1:02:Y2ViZTZjMjNkNWY4ZTc5ZWI0OTRkZDNhNzJmNjQ4MzgyNWMyZjNmOGE1ZWNmZTJmNRhF4A==: 00:14:16.136 23:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:16.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:16.136 23:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:16.136 23:17:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.136 23:17:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.392 23:17:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.392 23:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:16.392 23:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:16.392 23:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:16.648 23:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:14:16.648 23:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:16.648 23:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:16.648 23:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:16.648 23:17:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:16.648 23:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:16.648 23:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:16.648 23:17:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.648 23:17:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.648 23:17:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.648 23:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:16.648 23:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:16.905 00:14:16.905 23:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:16.905 23:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:16.905 23:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:17.163 23:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:17.163 23:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:17.163 23:17:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.163 23:17:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.163 23:17:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.163 23:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:17.163 { 00:14:17.163 "cntlid": 5, 00:14:17.163 "qid": 0, 00:14:17.163 "state": "enabled", 00:14:17.163 "thread": "nvmf_tgt_poll_group_000", 00:14:17.163 "listen_address": { 00:14:17.163 "trtype": "TCP", 00:14:17.163 "adrfam": "IPv4", 00:14:17.163 "traddr": "10.0.0.2", 00:14:17.163 "trsvcid": "4420" 00:14:17.163 }, 00:14:17.163 "peer_address": { 00:14:17.163 "trtype": "TCP", 00:14:17.163 "adrfam": "IPv4", 00:14:17.163 "traddr": "10.0.0.1", 00:14:17.163 "trsvcid": "54286" 00:14:17.163 }, 00:14:17.163 "auth": { 00:14:17.163 "state": "completed", 00:14:17.163 "digest": "sha256", 00:14:17.163 "dhgroup": "null" 00:14:17.163 } 00:14:17.163 } 00:14:17.163 ]' 00:14:17.163 23:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:17.163 23:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:17.163 23:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:17.163 23:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:17.163 23:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:14:17.163 23:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:17.163 23:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:17.163 23:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:17.730 23:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:Yjk5M2MyMmFmN2ZkNmEwZmZiMzlhNWQ2OWEzZDU5ODBhNjhlNGUwMDU0NmJmYmQ3f1jA0Q==: --dhchap-ctrl-secret DHHC-1:01:ZGMwOTQwYWVlNGZhMDM4YTA4MDZmYzkyNjU4MDE2MmLcU4Wx: 00:14:18.661 23:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:18.661 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:18.661 23:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:18.661 23:17:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.661 23:17:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.661 23:17:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.661 23:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:18.661 23:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:18.661 23:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:18.917 23:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:14:18.918 23:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:18.918 23:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:18.918 23:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:18.918 23:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:18.918 23:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:18.918 23:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:14:18.918 23:17:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.918 23:17:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.918 23:17:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.918 23:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:18.918 23:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:19.175 00:14:19.175 23:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:19.175 23:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:19.175 23:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:19.458 23:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:19.458 23:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:19.458 23:17:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.458 23:17:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.458 23:17:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.458 23:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:19.458 { 00:14:19.458 "cntlid": 7, 00:14:19.458 "qid": 0, 00:14:19.458 "state": "enabled", 00:14:19.458 "thread": "nvmf_tgt_poll_group_000", 00:14:19.458 "listen_address": { 00:14:19.458 "trtype": "TCP", 00:14:19.458 "adrfam": "IPv4", 00:14:19.458 "traddr": "10.0.0.2", 00:14:19.458 "trsvcid": "4420" 00:14:19.458 }, 00:14:19.458 "peer_address": { 00:14:19.458 "trtype": "TCP", 00:14:19.458 "adrfam": "IPv4", 00:14:19.458 "traddr": "10.0.0.1", 00:14:19.458 "trsvcid": "54308" 00:14:19.458 }, 00:14:19.458 "auth": { 00:14:19.458 "state": "completed", 00:14:19.458 "digest": "sha256", 00:14:19.458 "dhgroup": "null" 00:14:19.458 } 00:14:19.458 } 00:14:19.458 ]' 00:14:19.458 23:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:19.458 23:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:19.458 23:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:19.458 23:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:19.458 23:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:19.458 23:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:19.458 23:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:19.458 23:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:19.749 23:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTc4N2Q5YjQ1Y2ZhN2YyZmUzMTBlYTk1YmE3MGI5ZmQ3ODQ3YTM5YWVmNzIwYTQwZmZhYTc1Y2VkMjVjNjQ1MSpPd20=: 00:14:20.681 23:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:20.681 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:20.681 23:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
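The assertions above are how each round is verified: the test dumps the subsystem's active queue pairs from the target and checks the negotiated auth fields. A condensed sketch of that check, using the same RPC call and jq filters that appear in this trace (run against the target's default RPC socket; the subsystem NQN is the one used throughout this run, and the expected values match the sha256/null round just completed):

    # Dump the active qpairs for the subsystem under test
    qpairs=$(./scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    # Assert the negotiated digest, DH group, and final auth state
    [[ $(echo "$qpairs" | jq -r '.[0].auth.digest')  == sha256    ]]
    [[ $(echo "$qpairs" | jq -r '.[0].auth.dhgroup') == null      ]]
    [[ $(echo "$qpairs" | jq -r '.[0].auth.state')   == completed ]]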
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:20.681 23:17:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.681 23:17:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.681 23:17:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.681 23:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:20.681 23:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:20.681 23:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:20.681 23:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:20.939 23:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:14:20.939 23:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:20.939 23:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:20.939 23:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:20.939 23:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:20.939 23:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:20.939 23:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:20.939 23:17:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.939 23:17:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.939 23:17:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.939 23:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:20.939 23:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:21.504 00:14:21.504 23:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:21.504 23:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:21.504 23:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:21.762 23:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:21.762 23:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:21.762 23:17:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
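From this point the trace is the same round repeated under different parameters: the loop markers visible in the xtrace (target/auth.sh@92 and @93) step through the DH groups exercised in this section (null above, then ffdhe2048, ffdhe3072, and ffdhe4096 below) and through key IDs 0-3, calling connect_authenticate with the sha256 digest each time. A rough paraphrase of the loop shape the xtrace shows (the dhgroups and keys arrays are populated earlier in target/auth.sh, outside this excerpt):

    # Paraphrase of the iteration visible in the xtrace above
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            connect_authenticate sha256 "$dhgroup" "$keyid"
        done
    done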
-- # xtrace_disable 00:14:21.762 23:17:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.762 23:17:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.762 23:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:21.762 { 00:14:21.762 "cntlid": 9, 00:14:21.762 "qid": 0, 00:14:21.762 "state": "enabled", 00:14:21.762 "thread": "nvmf_tgt_poll_group_000", 00:14:21.762 "listen_address": { 00:14:21.762 "trtype": "TCP", 00:14:21.762 "adrfam": "IPv4", 00:14:21.762 "traddr": "10.0.0.2", 00:14:21.762 "trsvcid": "4420" 00:14:21.762 }, 00:14:21.762 "peer_address": { 00:14:21.762 "trtype": "TCP", 00:14:21.762 "adrfam": "IPv4", 00:14:21.762 "traddr": "10.0.0.1", 00:14:21.762 "trsvcid": "54326" 00:14:21.762 }, 00:14:21.762 "auth": { 00:14:21.762 "state": "completed", 00:14:21.762 "digest": "sha256", 00:14:21.762 "dhgroup": "ffdhe2048" 00:14:21.762 } 00:14:21.762 } 00:14:21.762 ]' 00:14:21.762 23:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:21.762 23:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:21.762 23:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:21.762 23:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:21.762 23:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:21.762 23:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:21.762 23:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:21.762 23:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:22.019 23:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZTBjNTgzZTIyM2Y0NDIwMjFiMWU3MDRmMzA5OTVhZGY1MGI2YWY0Y2Y3NmY4YjhlDgIwtw==: --dhchap-ctrl-secret DHHC-1:03:Y2I3Y2FiNzA1YTI0NjI3YTEwNGY5Y2M4ODA4YTc2MWNmNTdjYTY2MWZjNDdmZDExMWExZTllMGM1NWI5YmY4McqrS28=: 00:14:22.953 23:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:22.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:22.953 23:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:22.953 23:17:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.953 23:17:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.953 23:17:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.953 23:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:22.953 23:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:22.953 23:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:14:23.210 23:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:14:23.210 23:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:23.210 23:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:23.210 23:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:23.210 23:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:23.210 23:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:23.210 23:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:23.210 23:17:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.210 23:17:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.210 23:17:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.210 23:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:23.210 23:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:23.775 00:14:23.775 23:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:23.775 23:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:23.775 23:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:23.775 23:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:23.775 23:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:23.775 23:17:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.775 23:17:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.775 23:17:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.775 23:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:23.775 { 00:14:23.775 "cntlid": 11, 00:14:23.775 "qid": 0, 00:14:23.775 "state": "enabled", 00:14:23.775 "thread": "nvmf_tgt_poll_group_000", 00:14:23.775 "listen_address": { 00:14:23.775 "trtype": "TCP", 00:14:23.775 "adrfam": "IPv4", 00:14:23.775 "traddr": "10.0.0.2", 00:14:23.775 "trsvcid": "4420" 00:14:23.775 }, 00:14:23.775 "peer_address": { 00:14:23.775 "trtype": "TCP", 00:14:23.775 "adrfam": "IPv4", 00:14:23.775 "traddr": "10.0.0.1", 00:14:23.775 "trsvcid": "54368" 00:14:23.775 }, 00:14:23.775 "auth": { 00:14:23.775 "state": "completed", 00:14:23.775 "digest": "sha256", 00:14:23.775 "dhgroup": "ffdhe2048" 00:14:23.775 } 00:14:23.775 } 00:14:23.775 ]' 00:14:23.775 
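Each round splits the work between the host-side and target-side RPC servers: the host's bdev_nvme options pin the digest/DH group to offer, the target subsystem gets the host NQN added with the matching DH-HMAC-CHAP key names, and the host then attaches a controller, which is where the authentication actually runs. A minimal sketch of one ffdhe2048/key1 round as it appears in this trace (key1/ckey1 are key names registered earlier in the script; $HOST_NQN stands in for the uuid-based host NQN spelled out in the log, and rpc.py paths/sockets follow this run):

    # Host side: restrict the initiator to one digest/dhgroup combination
    ./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    # Target side: allow the host NQN with a DHCHAP key (and ctrlr key for bidirectional auth)
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOST_NQN" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # Host side: attach a controller; DH-HMAC-CHAP is negotiated here
    ./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOST_NQN" \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # Tear down before the next round
    ./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    ./scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOST_NQN"

Each of these calls appears in the trace with the full workspace path to rpc.py and the host NQN written out.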
23:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:24.033 23:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:24.033 23:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:24.033 23:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:24.033 23:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:24.033 23:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:24.033 23:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:24.033 23:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:24.291 23:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:NzRlNDllZDZiMzY5NTI3ZTgyMDczZjhhZmEzYjU5NTGFKtjo: --dhchap-ctrl-secret DHHC-1:02:Y2ViZTZjMjNkNWY4ZTc5ZWI0OTRkZDNhNzJmNjQ4MzgyNWMyZjNmOGE1ZWNmZTJmNRhF4A==: 00:14:25.222 23:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:25.222 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:25.222 23:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:25.222 23:17:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.222 23:17:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.222 23:17:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.222 23:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:25.222 23:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:25.222 23:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:25.479 23:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:14:25.479 23:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:25.479 23:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:25.479 23:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:25.479 23:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:25.479 23:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:25.479 23:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:25.479 23:17:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.479 23:17:40 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:25.479 23:17:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.479 23:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:25.479 23:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:26.044 00:14:26.044 23:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:26.044 23:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.044 23:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:26.044 23:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.044 23:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:26.044 23:17:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.044 23:17:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.044 23:17:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.044 23:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:26.044 { 00:14:26.044 "cntlid": 13, 00:14:26.044 "qid": 0, 00:14:26.044 "state": "enabled", 00:14:26.044 "thread": "nvmf_tgt_poll_group_000", 00:14:26.044 "listen_address": { 00:14:26.044 "trtype": "TCP", 00:14:26.044 "adrfam": "IPv4", 00:14:26.044 "traddr": "10.0.0.2", 00:14:26.044 "trsvcid": "4420" 00:14:26.044 }, 00:14:26.044 "peer_address": { 00:14:26.044 "trtype": "TCP", 00:14:26.044 "adrfam": "IPv4", 00:14:26.044 "traddr": "10.0.0.1", 00:14:26.044 "trsvcid": "60824" 00:14:26.044 }, 00:14:26.044 "auth": { 00:14:26.044 "state": "completed", 00:14:26.044 "digest": "sha256", 00:14:26.044 "dhgroup": "ffdhe2048" 00:14:26.044 } 00:14:26.044 } 00:14:26.044 ]' 00:14:26.044 23:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:26.301 23:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:26.301 23:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:26.301 23:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:26.301 23:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:26.301 23:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:26.301 23:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:26.301 23:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:26.558 23:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:Yjk5M2MyMmFmN2ZkNmEwZmZiMzlhNWQ2OWEzZDU5ODBhNjhlNGUwMDU0NmJmYmQ3f1jA0Q==: --dhchap-ctrl-secret DHHC-1:01:ZGMwOTQwYWVlNGZhMDM4YTA4MDZmYzkyNjU4MDE2MmLcU4Wx: 00:14:27.489 23:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:27.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:27.489 23:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:27.489 23:17:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.489 23:17:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.489 23:17:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.489 23:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:27.489 23:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:27.489 23:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:27.747 23:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:14:27.747 23:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:27.747 23:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:27.747 23:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:27.747 23:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:27.747 23:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:27.747 23:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:14:27.747 23:17:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.747 23:17:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.747 23:17:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.747 23:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:27.747 23:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:28.004 00:14:28.261 23:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:28.261 23:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:28.261 23:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:28.261 23:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:28.261 23:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:28.261 23:17:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.261 23:17:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.518 23:17:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.518 23:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:28.518 { 00:14:28.518 "cntlid": 15, 00:14:28.518 "qid": 0, 00:14:28.518 "state": "enabled", 00:14:28.518 "thread": "nvmf_tgt_poll_group_000", 00:14:28.518 "listen_address": { 00:14:28.518 "trtype": "TCP", 00:14:28.518 "adrfam": "IPv4", 00:14:28.518 "traddr": "10.0.0.2", 00:14:28.518 "trsvcid": "4420" 00:14:28.518 }, 00:14:28.518 "peer_address": { 00:14:28.518 "trtype": "TCP", 00:14:28.518 "adrfam": "IPv4", 00:14:28.518 "traddr": "10.0.0.1", 00:14:28.518 "trsvcid": "60852" 00:14:28.518 }, 00:14:28.518 "auth": { 00:14:28.518 "state": "completed", 00:14:28.518 "digest": "sha256", 00:14:28.518 "dhgroup": "ffdhe2048" 00:14:28.518 } 00:14:28.518 } 00:14:28.518 ]' 00:14:28.518 23:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:28.518 23:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:28.518 23:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:28.518 23:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:28.518 23:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:28.518 23:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:28.518 23:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:28.518 23:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:28.774 23:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTc4N2Q5YjQ1Y2ZhN2YyZmUzMTBlYTk1YmE3MGI5ZmQ3ODQ3YTM5YWVmNzIwYTQwZmZhYTc1Y2VkMjVjNjQ1MSpPd20=: 00:14:29.704 23:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:29.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:29.704 23:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:29.704 23:17:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.704 23:17:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.704 23:17:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.704 23:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:29.704 23:17:44 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:29.704 23:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:29.704 23:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:29.960 23:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:14:29.961 23:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:29.961 23:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:29.961 23:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:29.961 23:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:29.961 23:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:29.961 23:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:29.961 23:17:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.961 23:17:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.961 23:17:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.961 23:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:29.961 23:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:30.524 00:14:30.524 23:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:30.524 23:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:30.524 23:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.780 23:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.780 23:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:30.780 23:17:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.780 23:17:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.780 23:17:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.780 23:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:30.780 { 00:14:30.780 "cntlid": 17, 00:14:30.780 "qid": 0, 00:14:30.780 "state": "enabled", 00:14:30.780 "thread": "nvmf_tgt_poll_group_000", 00:14:30.780 "listen_address": { 00:14:30.780 "trtype": "TCP", 00:14:30.780 "adrfam": "IPv4", 00:14:30.780 "traddr": 
"10.0.0.2", 00:14:30.780 "trsvcid": "4420" 00:14:30.780 }, 00:14:30.780 "peer_address": { 00:14:30.780 "trtype": "TCP", 00:14:30.780 "adrfam": "IPv4", 00:14:30.780 "traddr": "10.0.0.1", 00:14:30.780 "trsvcid": "60884" 00:14:30.780 }, 00:14:30.780 "auth": { 00:14:30.780 "state": "completed", 00:14:30.780 "digest": "sha256", 00:14:30.780 "dhgroup": "ffdhe3072" 00:14:30.780 } 00:14:30.780 } 00:14:30.780 ]' 00:14:30.780 23:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:30.780 23:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:30.780 23:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:30.780 23:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:30.780 23:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:30.780 23:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:30.780 23:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:30.780 23:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:31.037 23:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZTBjNTgzZTIyM2Y0NDIwMjFiMWU3MDRmMzA5OTVhZGY1MGI2YWY0Y2Y3NmY4YjhlDgIwtw==: --dhchap-ctrl-secret DHHC-1:03:Y2I3Y2FiNzA1YTI0NjI3YTEwNGY5Y2M4ODA4YTc2MWNmNTdjYTY2MWZjNDdmZDExMWExZTllMGM1NWI5YmY4McqrS28=: 00:14:31.965 23:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:31.965 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:31.965 23:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:31.965 23:17:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.965 23:17:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.965 23:17:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.965 23:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:31.965 23:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:31.965 23:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:32.222 23:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:14:32.222 23:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:32.222 23:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:32.222 23:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:32.222 23:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:32.222 23:17:47 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:32.222 23:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.222 23:17:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.222 23:17:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.222 23:17:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.222 23:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.222 23:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.785 00:14:32.785 23:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:32.785 23:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.785 23:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:33.041 23:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:33.041 23:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:33.041 23:17:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.041 23:17:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.041 23:17:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.041 23:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:33.041 { 00:14:33.041 "cntlid": 19, 00:14:33.041 "qid": 0, 00:14:33.041 "state": "enabled", 00:14:33.041 "thread": "nvmf_tgt_poll_group_000", 00:14:33.041 "listen_address": { 00:14:33.041 "trtype": "TCP", 00:14:33.041 "adrfam": "IPv4", 00:14:33.041 "traddr": "10.0.0.2", 00:14:33.041 "trsvcid": "4420" 00:14:33.041 }, 00:14:33.041 "peer_address": { 00:14:33.041 "trtype": "TCP", 00:14:33.041 "adrfam": "IPv4", 00:14:33.041 "traddr": "10.0.0.1", 00:14:33.041 "trsvcid": "60920" 00:14:33.041 }, 00:14:33.041 "auth": { 00:14:33.041 "state": "completed", 00:14:33.041 "digest": "sha256", 00:14:33.041 "dhgroup": "ffdhe3072" 00:14:33.041 } 00:14:33.041 } 00:14:33.041 ]' 00:14:33.041 23:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:33.041 23:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:33.041 23:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:33.041 23:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:33.041 23:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:33.041 23:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:33.041 23:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:33.041 23:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:33.297 23:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:NzRlNDllZDZiMzY5NTI3ZTgyMDczZjhhZmEzYjU5NTGFKtjo: --dhchap-ctrl-secret DHHC-1:02:Y2ViZTZjMjNkNWY4ZTc5ZWI0OTRkZDNhNzJmNjQ4MzgyNWMyZjNmOGE1ZWNmZTJmNRhF4A==: 00:14:34.229 23:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:34.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:34.230 23:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:34.230 23:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.230 23:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.230 23:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.230 23:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:34.230 23:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:34.230 23:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:34.486 23:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:14:34.486 23:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:34.486 23:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:34.486 23:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:34.486 23:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:34.486 23:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:34.486 23:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.486 23:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.486 23:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.744 23:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.744 23:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.744 23:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:35.000 00:14:35.000 23:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:35.000 23:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:35.000 23:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:35.257 23:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:35.258 23:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:35.258 23:17:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.258 23:17:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.258 23:17:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.258 23:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:35.258 { 00:14:35.258 "cntlid": 21, 00:14:35.258 "qid": 0, 00:14:35.258 "state": "enabled", 00:14:35.258 "thread": "nvmf_tgt_poll_group_000", 00:14:35.258 "listen_address": { 00:14:35.258 "trtype": "TCP", 00:14:35.258 "adrfam": "IPv4", 00:14:35.258 "traddr": "10.0.0.2", 00:14:35.258 "trsvcid": "4420" 00:14:35.258 }, 00:14:35.258 "peer_address": { 00:14:35.258 "trtype": "TCP", 00:14:35.258 "adrfam": "IPv4", 00:14:35.258 "traddr": "10.0.0.1", 00:14:35.258 "trsvcid": "60956" 00:14:35.258 }, 00:14:35.258 "auth": { 00:14:35.258 "state": "completed", 00:14:35.258 "digest": "sha256", 00:14:35.258 "dhgroup": "ffdhe3072" 00:14:35.258 } 00:14:35.258 } 00:14:35.258 ]' 00:14:35.258 23:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:35.258 23:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:35.258 23:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:35.258 23:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:35.258 23:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:35.258 23:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:35.258 23:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:35.258 23:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.515 23:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:Yjk5M2MyMmFmN2ZkNmEwZmZiMzlhNWQ2OWEzZDU5ODBhNjhlNGUwMDU0NmJmYmQ3f1jA0Q==: --dhchap-ctrl-secret DHHC-1:01:ZGMwOTQwYWVlNGZhMDM4YTA4MDZmYzkyNjU4MDE2MmLcU4Wx: 00:14:36.446 23:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
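Between the RPC-driven rounds the same keys are also exercised through the kernel initiator: nvme-cli connects to the subsystem with the DHCHAP secrets passed explicitly, and the "disconnected 1 controller(s)" line confirms the fabric connect (and therefore the authentication) went through. Stripped of this run's secret material, the command pair looks like the following; the secret values are placeholders, everything else is as it appears in the trace:

    # Kernel initiator: authenticate with an explicit host secret and, for
    # bidirectional auth, a controller secret (placeholders, not real keys)
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$HOST_NQN" --hostid "$HOST_ID" \
        --dhchap-secret "DHHC-1:02:<host-secret>:" \
        --dhchap-ctrl-secret "DHHC-1:01:<ctrl-secret>:"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0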
00:14:36.446 23:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:36.446 23:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.446 23:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.446 23:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.704 23:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:36.704 23:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:36.704 23:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:36.704 23:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:14:36.962 23:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:36.962 23:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:36.962 23:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:36.962 23:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:36.962 23:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.962 23:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:14:36.962 23:17:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.962 23:17:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.962 23:17:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.962 23:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:36.962 23:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:37.219 00:14:37.219 23:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:37.219 23:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:37.219 23:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.477 23:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.477 23:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.477 23:17:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.477 23:17:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:14:37.477 23:17:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.477 23:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:37.477 { 00:14:37.477 "cntlid": 23, 00:14:37.477 "qid": 0, 00:14:37.477 "state": "enabled", 00:14:37.477 "thread": "nvmf_tgt_poll_group_000", 00:14:37.477 "listen_address": { 00:14:37.477 "trtype": "TCP", 00:14:37.477 "adrfam": "IPv4", 00:14:37.477 "traddr": "10.0.0.2", 00:14:37.477 "trsvcid": "4420" 00:14:37.477 }, 00:14:37.477 "peer_address": { 00:14:37.477 "trtype": "TCP", 00:14:37.477 "adrfam": "IPv4", 00:14:37.477 "traddr": "10.0.0.1", 00:14:37.477 "trsvcid": "37522" 00:14:37.477 }, 00:14:37.478 "auth": { 00:14:37.478 "state": "completed", 00:14:37.478 "digest": "sha256", 00:14:37.478 "dhgroup": "ffdhe3072" 00:14:37.478 } 00:14:37.478 } 00:14:37.478 ]' 00:14:37.478 23:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:37.478 23:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:37.478 23:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:37.478 23:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:37.478 23:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:37.735 23:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.735 23:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.735 23:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.991 23:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTc4N2Q5YjQ1Y2ZhN2YyZmUzMTBlYTk1YmE3MGI5ZmQ3ODQ3YTM5YWVmNzIwYTQwZmZhYTc1Y2VkMjVjNjQ1MSpPd20=: 00:14:38.922 23:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.922 23:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:38.922 23:17:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.922 23:17:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.922 23:17:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.922 23:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:38.922 23:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:38.922 23:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:38.922 23:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:39.179 23:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:14:39.179 23:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:39.179 23:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:39.179 23:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:39.179 23:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:39.179 23:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:39.179 23:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:39.179 23:17:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.179 23:17:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.179 23:17:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.179 23:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:39.179 23:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:39.435 00:14:39.691 23:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:39.691 23:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.691 23:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:39.949 23:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.949 23:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.949 23:17:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.949 23:17:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.949 23:17:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.949 23:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:39.949 { 00:14:39.949 "cntlid": 25, 00:14:39.949 "qid": 0, 00:14:39.949 "state": "enabled", 00:14:39.949 "thread": "nvmf_tgt_poll_group_000", 00:14:39.949 "listen_address": { 00:14:39.949 "trtype": "TCP", 00:14:39.949 "adrfam": "IPv4", 00:14:39.949 "traddr": "10.0.0.2", 00:14:39.949 "trsvcid": "4420" 00:14:39.949 }, 00:14:39.949 "peer_address": { 00:14:39.949 "trtype": "TCP", 00:14:39.949 "adrfam": "IPv4", 00:14:39.949 "traddr": "10.0.0.1", 00:14:39.949 "trsvcid": "37550" 00:14:39.949 }, 00:14:39.949 "auth": { 00:14:39.949 "state": "completed", 00:14:39.949 "digest": "sha256", 00:14:39.949 "dhgroup": "ffdhe4096" 00:14:39.949 } 00:14:39.949 } 00:14:39.949 ]' 00:14:39.949 23:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:39.949 23:17:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:39.949 23:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:39.949 23:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:39.949 23:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:39.949 23:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.949 23:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.949 23:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:40.207 23:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZTBjNTgzZTIyM2Y0NDIwMjFiMWU3MDRmMzA5OTVhZGY1MGI2YWY0Y2Y3NmY4YjhlDgIwtw==: --dhchap-ctrl-secret DHHC-1:03:Y2I3Y2FiNzA1YTI0NjI3YTEwNGY5Y2M4ODA4YTc2MWNmNTdjYTY2MWZjNDdmZDExMWExZTllMGM1NWI5YmY4McqrS28=: 00:14:41.139 23:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:41.139 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:41.139 23:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:41.139 23:17:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.139 23:17:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.139 23:17:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.139 23:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:41.139 23:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:41.139 23:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:41.399 23:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:14:41.399 23:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:41.399 23:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:41.399 23:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:41.399 23:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:41.399 23:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.399 23:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:41.399 23:17:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.399 23:17:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.399 23:17:56 
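After the SPDK-host attach and detach round trip, the same key material is replayed through the kernel initiator with nvme-cli, passing the secrets on the command line in their DHHC-1 form; on success nvme disconnect reports "disconnected 1 controller(s)" as seen above. The two-digit field after DHHC-1: appears to track how the secret was transformed (00 for a plain key), but treat that reading as an assumption rather than something this log states. A sketch of the nvme-cli leg, with placeholders standing in for the generated secrets printed in the trace:

    # sketch; the DHHC-1 strings are placeholders for the secrets printed in the trace above
    hostkey='DHHC-1:00:<host secret as printed above>:'
    ctrlkey='DHHC-1:03:<controller secret as printed above>:'
    hostid=cd6acfbe-4794-e311-a299-001e67a97b02
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "nqn.2014-08.org.nvmexpress:uuid:$hostid" --hostid "$hostid" \
        --dhchap-secret "$hostkey" --dhchap-ctrl-secret "$ctrlkey"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0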
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.399 23:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:41.399 23:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:41.985 00:14:41.985 23:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:41.985 23:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:41.985 23:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.278 23:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.278 23:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.278 23:17:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.278 23:17:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.278 23:17:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.278 23:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:42.278 { 00:14:42.278 "cntlid": 27, 00:14:42.278 "qid": 0, 00:14:42.278 "state": "enabled", 00:14:42.278 "thread": "nvmf_tgt_poll_group_000", 00:14:42.278 "listen_address": { 00:14:42.278 "trtype": "TCP", 00:14:42.278 "adrfam": "IPv4", 00:14:42.278 "traddr": "10.0.0.2", 00:14:42.278 "trsvcid": "4420" 00:14:42.278 }, 00:14:42.278 "peer_address": { 00:14:42.278 "trtype": "TCP", 00:14:42.278 "adrfam": "IPv4", 00:14:42.278 "traddr": "10.0.0.1", 00:14:42.278 "trsvcid": "37564" 00:14:42.278 }, 00:14:42.278 "auth": { 00:14:42.278 "state": "completed", 00:14:42.278 "digest": "sha256", 00:14:42.278 "dhgroup": "ffdhe4096" 00:14:42.278 } 00:14:42.278 } 00:14:42.278 ]' 00:14:42.278 23:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:42.278 23:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:42.278 23:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:42.278 23:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:42.278 23:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:42.278 23:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.278 23:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.278 23:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.536 23:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:NzRlNDllZDZiMzY5NTI3ZTgyMDczZjhhZmEzYjU5NTGFKtjo: --dhchap-ctrl-secret DHHC-1:02:Y2ViZTZjMjNkNWY4ZTc5ZWI0OTRkZDNhNzJmNjQ4MzgyNWMyZjNmOGE1ZWNmZTJmNRhF4A==: 00:14:43.468 23:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.468 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.468 23:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:43.468 23:17:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.468 23:17:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.468 23:17:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.468 23:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:43.468 23:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:43.468 23:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:43.726 23:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:14:43.726 23:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:43.726 23:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:43.726 23:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:43.726 23:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:43.726 23:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.726 23:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:43.726 23:17:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.726 23:17:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.726 23:17:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.726 23:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:43.726 23:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:44.290 00:14:44.290 23:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:44.290 23:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:44.290 23:17:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.548 23:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.548 23:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.548 23:17:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.548 23:17:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.548 23:17:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.548 23:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:44.548 { 00:14:44.548 "cntlid": 29, 00:14:44.548 "qid": 0, 00:14:44.548 "state": "enabled", 00:14:44.548 "thread": "nvmf_tgt_poll_group_000", 00:14:44.548 "listen_address": { 00:14:44.548 "trtype": "TCP", 00:14:44.548 "adrfam": "IPv4", 00:14:44.548 "traddr": "10.0.0.2", 00:14:44.548 "trsvcid": "4420" 00:14:44.548 }, 00:14:44.548 "peer_address": { 00:14:44.548 "trtype": "TCP", 00:14:44.548 "adrfam": "IPv4", 00:14:44.548 "traddr": "10.0.0.1", 00:14:44.548 "trsvcid": "37590" 00:14:44.548 }, 00:14:44.548 "auth": { 00:14:44.548 "state": "completed", 00:14:44.548 "digest": "sha256", 00:14:44.548 "dhgroup": "ffdhe4096" 00:14:44.548 } 00:14:44.548 } 00:14:44.548 ]' 00:14:44.548 23:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:44.548 23:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:44.548 23:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:44.548 23:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:44.548 23:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:44.548 23:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.548 23:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.548 23:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:44.806 23:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:Yjk5M2MyMmFmN2ZkNmEwZmZiMzlhNWQ2OWEzZDU5ODBhNjhlNGUwMDU0NmJmYmQ3f1jA0Q==: --dhchap-ctrl-secret DHHC-1:01:ZGMwOTQwYWVlNGZhMDM4YTA4MDZmYzkyNjU4MDE2MmLcU4Wx: 00:14:45.739 23:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.739 23:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:45.739 23:18:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.739 23:18:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.739 23:18:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.739 23:18:00 
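Each pass is verified from both ends: bdev_nvme_get_controllers on the host socket must list the attached nvme0, and nvmf_subsystem_get_qpairs on the target must report a qpair whose auth block completed with the expected digest and dhgroup. The jq filters below are the ones the trace runs; wrapping them in [[ ... ]] comparisons is what turns them into pass/fail checks:

    # sketch of the verification step, using the same RPCs and jq filters as the trace
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    name=$($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]

    qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha256" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe4096" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]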
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:45.739 23:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:45.740 23:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:45.997 23:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:14:45.997 23:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:45.997 23:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:45.997 23:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:45.997 23:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:45.997 23:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.997 23:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:14:45.997 23:18:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.997 23:18:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.997 23:18:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.997 23:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:45.997 23:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:46.563 00:14:46.563 23:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:46.563 23:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:46.563 23:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.821 23:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.821 23:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.821 23:18:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.821 23:18:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.821 23:18:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.821 23:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:46.821 { 00:14:46.821 "cntlid": 31, 00:14:46.821 "qid": 0, 00:14:46.821 "state": "enabled", 00:14:46.821 "thread": "nvmf_tgt_poll_group_000", 00:14:46.821 "listen_address": { 00:14:46.821 "trtype": "TCP", 00:14:46.821 "adrfam": "IPv4", 00:14:46.821 "traddr": "10.0.0.2", 00:14:46.821 "trsvcid": "4420" 00:14:46.821 }, 
00:14:46.821 "peer_address": { 00:14:46.821 "trtype": "TCP", 00:14:46.821 "adrfam": "IPv4", 00:14:46.821 "traddr": "10.0.0.1", 00:14:46.821 "trsvcid": "33842" 00:14:46.821 }, 00:14:46.821 "auth": { 00:14:46.821 "state": "completed", 00:14:46.821 "digest": "sha256", 00:14:46.821 "dhgroup": "ffdhe4096" 00:14:46.821 } 00:14:46.821 } 00:14:46.821 ]' 00:14:46.821 23:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:46.821 23:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:46.821 23:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:46.821 23:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:46.821 23:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:46.821 23:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:46.821 23:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:46.821 23:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:47.077 23:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTc4N2Q5YjQ1Y2ZhN2YyZmUzMTBlYTk1YmE3MGI5ZmQ3ODQ3YTM5YWVmNzIwYTQwZmZhYTc1Y2VkMjVjNjQ1MSpPd20=: 00:14:48.008 23:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:48.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:48.008 23:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:48.008 23:18:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.008 23:18:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.008 23:18:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.008 23:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:48.008 23:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:48.008 23:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:48.008 23:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:48.268 23:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:14:48.268 23:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:48.268 23:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:48.268 23:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:48.268 23:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:48.268 23:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:14:48.268 23:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:48.268 23:18:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.268 23:18:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.268 23:18:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.268 23:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:48.268 23:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:48.832 00:14:48.832 23:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:48.832 23:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:48.832 23:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:49.094 23:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:49.094 23:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:49.094 23:18:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.094 23:18:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.094 23:18:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.094 23:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:49.094 { 00:14:49.094 "cntlid": 33, 00:14:49.094 "qid": 0, 00:14:49.094 "state": "enabled", 00:14:49.094 "thread": "nvmf_tgt_poll_group_000", 00:14:49.094 "listen_address": { 00:14:49.094 "trtype": "TCP", 00:14:49.094 "adrfam": "IPv4", 00:14:49.094 "traddr": "10.0.0.2", 00:14:49.094 "trsvcid": "4420" 00:14:49.094 }, 00:14:49.094 "peer_address": { 00:14:49.094 "trtype": "TCP", 00:14:49.094 "adrfam": "IPv4", 00:14:49.094 "traddr": "10.0.0.1", 00:14:49.094 "trsvcid": "33862" 00:14:49.094 }, 00:14:49.094 "auth": { 00:14:49.094 "state": "completed", 00:14:49.094 "digest": "sha256", 00:14:49.094 "dhgroup": "ffdhe6144" 00:14:49.094 } 00:14:49.094 } 00:14:49.094 ]' 00:14:49.094 23:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:49.094 23:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:49.094 23:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:49.361 23:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:49.361 23:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:49.361 23:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:49.361 23:18:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:49.361 23:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:49.618 23:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZTBjNTgzZTIyM2Y0NDIwMjFiMWU3MDRmMzA5OTVhZGY1MGI2YWY0Y2Y3NmY4YjhlDgIwtw==: --dhchap-ctrl-secret DHHC-1:03:Y2I3Y2FiNzA1YTI0NjI3YTEwNGY5Y2M4ODA4YTc2MWNmNTdjYTY2MWZjNDdmZDExMWExZTllMGM1NWI5YmY4McqrS28=: 00:14:50.548 23:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:50.548 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:50.548 23:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:50.548 23:18:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.548 23:18:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.548 23:18:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.548 23:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:50.548 23:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:50.548 23:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:50.806 23:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:14:50.806 23:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:50.806 23:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:50.806 23:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:50.806 23:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:50.806 23:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:50.806 23:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:50.806 23:18:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.806 23:18:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.806 23:18:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.806 23:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:50.806 23:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:51.368 00:14:51.368 23:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:51.369 23:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:51.369 23:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.625 23:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.625 23:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:51.625 23:18:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.625 23:18:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.625 23:18:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.625 23:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:51.625 { 00:14:51.625 "cntlid": 35, 00:14:51.625 "qid": 0, 00:14:51.625 "state": "enabled", 00:14:51.625 "thread": "nvmf_tgt_poll_group_000", 00:14:51.625 "listen_address": { 00:14:51.625 "trtype": "TCP", 00:14:51.625 "adrfam": "IPv4", 00:14:51.625 "traddr": "10.0.0.2", 00:14:51.625 "trsvcid": "4420" 00:14:51.625 }, 00:14:51.625 "peer_address": { 00:14:51.625 "trtype": "TCP", 00:14:51.625 "adrfam": "IPv4", 00:14:51.625 "traddr": "10.0.0.1", 00:14:51.625 "trsvcid": "33886" 00:14:51.625 }, 00:14:51.625 "auth": { 00:14:51.625 "state": "completed", 00:14:51.625 "digest": "sha256", 00:14:51.625 "dhgroup": "ffdhe6144" 00:14:51.625 } 00:14:51.625 } 00:14:51.625 ]' 00:14:51.625 23:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:51.625 23:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:51.625 23:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:51.881 23:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:51.881 23:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:51.881 23:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:51.881 23:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:51.882 23:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:52.138 23:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:NzRlNDllZDZiMzY5NTI3ZTgyMDczZjhhZmEzYjU5NTGFKtjo: --dhchap-ctrl-secret DHHC-1:02:Y2ViZTZjMjNkNWY4ZTc5ZWI0OTRkZDNhNzJmNjQ4MzgyNWMyZjNmOGE1ZWNmZTJmNRhF4A==: 00:14:53.069 23:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:53.069 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:53.069 23:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:53.069 23:18:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.069 23:18:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.069 23:18:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.069 23:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:53.069 23:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:53.069 23:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:53.326 23:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:14:53.326 23:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:53.326 23:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:53.326 23:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:53.326 23:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:53.326 23:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:53.326 23:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:53.326 23:18:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.326 23:18:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.327 23:18:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.327 23:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:53.327 23:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:53.891 00:14:53.891 23:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:53.891 23:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:53.891 23:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:54.150 23:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.150 23:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.150 23:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.150 23:18:09 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:54.150 23:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.150 23:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:54.150 { 00:14:54.150 "cntlid": 37, 00:14:54.150 "qid": 0, 00:14:54.150 "state": "enabled", 00:14:54.150 "thread": "nvmf_tgt_poll_group_000", 00:14:54.150 "listen_address": { 00:14:54.150 "trtype": "TCP", 00:14:54.150 "adrfam": "IPv4", 00:14:54.150 "traddr": "10.0.0.2", 00:14:54.150 "trsvcid": "4420" 00:14:54.150 }, 00:14:54.150 "peer_address": { 00:14:54.150 "trtype": "TCP", 00:14:54.150 "adrfam": "IPv4", 00:14:54.150 "traddr": "10.0.0.1", 00:14:54.150 "trsvcid": "33906" 00:14:54.150 }, 00:14:54.150 "auth": { 00:14:54.150 "state": "completed", 00:14:54.150 "digest": "sha256", 00:14:54.150 "dhgroup": "ffdhe6144" 00:14:54.150 } 00:14:54.150 } 00:14:54.150 ]' 00:14:54.150 23:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:54.150 23:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:54.150 23:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:54.150 23:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:54.150 23:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:54.150 23:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.150 23:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.150 23:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.408 23:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:Yjk5M2MyMmFmN2ZkNmEwZmZiMzlhNWQ2OWEzZDU5ODBhNjhlNGUwMDU0NmJmYmQ3f1jA0Q==: --dhchap-ctrl-secret DHHC-1:01:ZGMwOTQwYWVlNGZhMDM4YTA4MDZmYzkyNjU4MDE2MmLcU4Wx: 00:14:55.340 23:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:55.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:55.340 23:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:55.340 23:18:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.340 23:18:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.340 23:18:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.340 23:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:55.340 23:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:55.597 23:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:55.597 23:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:14:55.597 23:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:55.597 23:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:55.597 23:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:55.597 23:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:55.597 23:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.597 23:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:14:55.597 23:18:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.597 23:18:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.856 23:18:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.856 23:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:55.856 23:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:56.430 00:14:56.430 23:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:56.430 23:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:56.430 23:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:56.688 23:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:56.688 23:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:56.688 23:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.688 23:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.688 23:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.688 23:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:56.688 { 00:14:56.688 "cntlid": 39, 00:14:56.688 "qid": 0, 00:14:56.688 "state": "enabled", 00:14:56.688 "thread": "nvmf_tgt_poll_group_000", 00:14:56.688 "listen_address": { 00:14:56.688 "trtype": "TCP", 00:14:56.688 "adrfam": "IPv4", 00:14:56.688 "traddr": "10.0.0.2", 00:14:56.688 "trsvcid": "4420" 00:14:56.688 }, 00:14:56.688 "peer_address": { 00:14:56.688 "trtype": "TCP", 00:14:56.688 "adrfam": "IPv4", 00:14:56.688 "traddr": "10.0.0.1", 00:14:56.688 "trsvcid": "58738" 00:14:56.688 }, 00:14:56.688 "auth": { 00:14:56.688 "state": "completed", 00:14:56.688 "digest": "sha256", 00:14:56.688 "dhgroup": "ffdhe6144" 00:14:56.688 } 00:14:56.688 } 00:14:56.688 ]' 00:14:56.688 23:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:56.688 23:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:56.688 23:18:11 
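The keyid 3 passes, including the one above, differ from the others in one detail: nvmf_subsystem_add_host is issued with --dhchap-key key3 only, and the matching nvme connect carries a single --dhchap-secret, so only the host proves itself and the controller is not challenged back. That comes from the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion seen in the trace, which emits nothing when no controller key is defined for that index. A small self-contained illustration of the same idiom (the placeholder values are mine, not from the script):

    # how the ${ckeys[$3]:+...} expansion behaves when index 3 has no controller key
    ckeys=( [0]=x [1]=x [2]=x )          # index 3 deliberately absent, as in this run
    keyid=3
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "extra args: ${ckey[@]:-<none>}"   # prints <none>; with keyid=0 it would add the flag pair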
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:56.688 23:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:56.688 23:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:56.688 23:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.688 23:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.688 23:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.945 23:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTc4N2Q5YjQ1Y2ZhN2YyZmUzMTBlYTk1YmE3MGI5ZmQ3ODQ3YTM5YWVmNzIwYTQwZmZhYTc1Y2VkMjVjNjQ1MSpPd20=: 00:14:57.877 23:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.877 23:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:57.877 23:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.877 23:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.877 23:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.877 23:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:57.877 23:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:57.877 23:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:57.877 23:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:58.135 23:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:14:58.135 23:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:58.135 23:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:58.135 23:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:58.135 23:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:58.135 23:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:58.135 23:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.135 23:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.135 23:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.135 23:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.135 23:18:13 
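The auth.sh line numbers echoed in the trace (@92 through @96) give away the shape of the driver loop: an outer loop over DH groups, an inner loop over key indices, with the host options reset and connect_authenticate run for each combination. Only sha256 appears in this excerpt, so the digest is shown fixed; a reconstructed sketch, where everything beyond the dhgroups/keys names is an assumption rather than a copy of the script:

    # reconstructed shape of the driver loop (auth.sh@92-@96 in the trace); hostrpc is the
    # script's wrapper around rpc.py -s /var/tmp/host.sock, as shown by the @31 lines above
    for dhgroup in "${dhgroups[@]}"; do            # ffdhe4096 ffdhe6144 ffdhe8192 ...
        for keyid in "${!keys[@]}"; do             # 0 1 2 3
            hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha256 "$dhgroup" "$keyid"
        done
    done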
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.135 23:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:59.065 00:14:59.065 23:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:59.065 23:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:59.065 23:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:59.322 23:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:59.322 23:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:59.322 23:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.322 23:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.322 23:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.322 23:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:59.322 { 00:14:59.322 "cntlid": 41, 00:14:59.322 "qid": 0, 00:14:59.322 "state": "enabled", 00:14:59.322 "thread": "nvmf_tgt_poll_group_000", 00:14:59.322 "listen_address": { 00:14:59.322 "trtype": "TCP", 00:14:59.322 "adrfam": "IPv4", 00:14:59.322 "traddr": "10.0.0.2", 00:14:59.322 "trsvcid": "4420" 00:14:59.322 }, 00:14:59.322 "peer_address": { 00:14:59.322 "trtype": "TCP", 00:14:59.322 "adrfam": "IPv4", 00:14:59.322 "traddr": "10.0.0.1", 00:14:59.322 "trsvcid": "58772" 00:14:59.322 }, 00:14:59.322 "auth": { 00:14:59.322 "state": "completed", 00:14:59.322 "digest": "sha256", 00:14:59.322 "dhgroup": "ffdhe8192" 00:14:59.322 } 00:14:59.322 } 00:14:59.322 ]' 00:14:59.322 23:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:59.322 23:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:59.322 23:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:59.322 23:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:59.322 23:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:59.580 23:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:59.580 23:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:59.580 23:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:59.838 23:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret 
DHHC-1:00:ZTBjNTgzZTIyM2Y0NDIwMjFiMWU3MDRmMzA5OTVhZGY1MGI2YWY0Y2Y3NmY4YjhlDgIwtw==: --dhchap-ctrl-secret DHHC-1:03:Y2I3Y2FiNzA1YTI0NjI3YTEwNGY5Y2M4ODA4YTc2MWNmNTdjYTY2MWZjNDdmZDExMWExZTllMGM1NWI5YmY4McqrS28=: 00:15:00.771 23:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:00.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:00.771 23:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:00.771 23:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.771 23:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.771 23:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.771 23:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:00.771 23:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:00.771 23:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:01.028 23:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:15:01.028 23:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:01.028 23:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:01.028 23:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:01.028 23:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:01.028 23:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.028 23:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:01.028 23:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.028 23:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.028 23:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.028 23:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:01.028 23:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:01.960 00:15:01.960 23:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:01.960 23:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:01.960 23:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.217 23:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.217 23:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.217 23:18:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.217 23:18:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.217 23:18:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.217 23:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:02.217 { 00:15:02.217 "cntlid": 43, 00:15:02.217 "qid": 0, 00:15:02.217 "state": "enabled", 00:15:02.217 "thread": "nvmf_tgt_poll_group_000", 00:15:02.217 "listen_address": { 00:15:02.217 "trtype": "TCP", 00:15:02.217 "adrfam": "IPv4", 00:15:02.217 "traddr": "10.0.0.2", 00:15:02.217 "trsvcid": "4420" 00:15:02.217 }, 00:15:02.217 "peer_address": { 00:15:02.217 "trtype": "TCP", 00:15:02.217 "adrfam": "IPv4", 00:15:02.217 "traddr": "10.0.0.1", 00:15:02.217 "trsvcid": "58800" 00:15:02.217 }, 00:15:02.217 "auth": { 00:15:02.217 "state": "completed", 00:15:02.217 "digest": "sha256", 00:15:02.217 "dhgroup": "ffdhe8192" 00:15:02.217 } 00:15:02.217 } 00:15:02.217 ]' 00:15:02.217 23:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:02.217 23:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:02.217 23:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:02.217 23:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:02.217 23:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:02.217 23:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.217 23:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:02.217 23:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:02.475 23:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:NzRlNDllZDZiMzY5NTI3ZTgyMDczZjhhZmEzYjU5NTGFKtjo: --dhchap-ctrl-secret DHHC-1:02:Y2ViZTZjMjNkNWY4ZTc5ZWI0OTRkZDNhNzJmNjQ4MzgyNWMyZjNmOGE1ZWNmZTJmNRhF4A==: 00:15:03.408 23:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.408 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.408 23:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:03.408 23:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.408 23:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.408 23:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.408 23:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:15:03.408 23:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:03.408 23:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:03.665 23:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:15:03.665 23:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:03.665 23:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:03.665 23:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:03.665 23:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:03.665 23:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.923 23:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:03.923 23:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.923 23:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.923 23:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.923 23:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:03.923 23:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.857 00:15:04.858 23:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:04.858 23:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:04.858 23:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.858 23:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.858 23:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:04.858 23:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.858 23:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.858 23:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.858 23:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:04.858 { 00:15:04.858 "cntlid": 45, 00:15:04.858 "qid": 0, 00:15:04.858 "state": "enabled", 00:15:04.858 "thread": "nvmf_tgt_poll_group_000", 00:15:04.858 "listen_address": { 00:15:04.858 "trtype": "TCP", 00:15:04.858 "adrfam": "IPv4", 00:15:04.858 "traddr": "10.0.0.2", 00:15:04.858 "trsvcid": "4420" 
00:15:04.858 }, 00:15:04.858 "peer_address": { 00:15:04.858 "trtype": "TCP", 00:15:04.858 "adrfam": "IPv4", 00:15:04.858 "traddr": "10.0.0.1", 00:15:04.858 "trsvcid": "58828" 00:15:04.858 }, 00:15:04.858 "auth": { 00:15:04.858 "state": "completed", 00:15:04.858 "digest": "sha256", 00:15:04.858 "dhgroup": "ffdhe8192" 00:15:04.858 } 00:15:04.858 } 00:15:04.858 ]' 00:15:04.858 23:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:05.142 23:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:05.142 23:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:05.142 23:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:05.142 23:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:05.142 23:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.142 23:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.142 23:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.406 23:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:Yjk5M2MyMmFmN2ZkNmEwZmZiMzlhNWQ2OWEzZDU5ODBhNjhlNGUwMDU0NmJmYmQ3f1jA0Q==: --dhchap-ctrl-secret DHHC-1:01:ZGMwOTQwYWVlNGZhMDM4YTA4MDZmYzkyNjU4MDE2MmLcU4Wx: 00:15:06.337 23:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:06.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:06.338 23:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:06.338 23:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.338 23:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.338 23:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.338 23:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:06.338 23:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:06.338 23:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:06.595 23:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:15:06.595 23:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:06.595 23:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:06.595 23:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:06.595 23:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:06.595 23:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.595 23:18:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:15:06.595 23:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.595 23:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.595 23:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.595 23:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:06.595 23:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:07.527 00:15:07.527 23:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:07.527 23:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:07.527 23:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.527 23:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.527 23:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.527 23:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.527 23:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.527 23:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.527 23:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:07.527 { 00:15:07.527 "cntlid": 47, 00:15:07.527 "qid": 0, 00:15:07.527 "state": "enabled", 00:15:07.527 "thread": "nvmf_tgt_poll_group_000", 00:15:07.527 "listen_address": { 00:15:07.527 "trtype": "TCP", 00:15:07.527 "adrfam": "IPv4", 00:15:07.527 "traddr": "10.0.0.2", 00:15:07.527 "trsvcid": "4420" 00:15:07.527 }, 00:15:07.527 "peer_address": { 00:15:07.527 "trtype": "TCP", 00:15:07.527 "adrfam": "IPv4", 00:15:07.527 "traddr": "10.0.0.1", 00:15:07.527 "trsvcid": "55144" 00:15:07.527 }, 00:15:07.527 "auth": { 00:15:07.527 "state": "completed", 00:15:07.527 "digest": "sha256", 00:15:07.527 "dhgroup": "ffdhe8192" 00:15:07.527 } 00:15:07.527 } 00:15:07.527 ]' 00:15:07.527 23:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:07.784 23:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:07.784 23:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:07.784 23:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:07.784 23:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:07.784 23:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.784 23:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.784 
23:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.042 23:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTc4N2Q5YjQ1Y2ZhN2YyZmUzMTBlYTk1YmE3MGI5ZmQ3ODQ3YTM5YWVmNzIwYTQwZmZhYTc1Y2VkMjVjNjQ1MSpPd20=: 00:15:08.971 23:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.971 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.971 23:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:08.971 23:18:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.971 23:18:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.971 23:18:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.971 23:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:08.971 23:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:08.971 23:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:08.972 23:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:08.972 23:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:09.228 23:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:15:09.228 23:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:09.228 23:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:09.228 23:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:09.228 23:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:09.228 23:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:09.228 23:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:09.228 23:18:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.228 23:18:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.228 23:18:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.228 23:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:09.228 23:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:09.484 00:15:09.484 23:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:09.484 23:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:09.484 23:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.741 23:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.741 23:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.741 23:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.741 23:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.741 23:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.741 23:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:09.741 { 00:15:09.741 "cntlid": 49, 00:15:09.741 "qid": 0, 00:15:09.741 "state": "enabled", 00:15:09.741 "thread": "nvmf_tgt_poll_group_000", 00:15:09.741 "listen_address": { 00:15:09.741 "trtype": "TCP", 00:15:09.741 "adrfam": "IPv4", 00:15:09.741 "traddr": "10.0.0.2", 00:15:09.741 "trsvcid": "4420" 00:15:09.741 }, 00:15:09.741 "peer_address": { 00:15:09.741 "trtype": "TCP", 00:15:09.741 "adrfam": "IPv4", 00:15:09.741 "traddr": "10.0.0.1", 00:15:09.741 "trsvcid": "55154" 00:15:09.741 }, 00:15:09.741 "auth": { 00:15:09.741 "state": "completed", 00:15:09.741 "digest": "sha384", 00:15:09.741 "dhgroup": "null" 00:15:09.741 } 00:15:09.741 } 00:15:09.741 ]' 00:15:09.741 23:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:09.998 23:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:09.998 23:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:09.998 23:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:09.998 23:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:09.998 23:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.998 23:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.998 23:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.255 23:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZTBjNTgzZTIyM2Y0NDIwMjFiMWU3MDRmMzA5OTVhZGY1MGI2YWY0Y2Y3NmY4YjhlDgIwtw==: --dhchap-ctrl-secret DHHC-1:03:Y2I3Y2FiNzA1YTI0NjI3YTEwNGY5Y2M4ODA4YTc2MWNmNTdjYTY2MWZjNDdmZDExMWExZTllMGM1NWI5YmY4McqrS28=: 00:15:11.186 23:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.186 23:18:26 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:11.186 23:18:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.186 23:18:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.186 23:18:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.186 23:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:11.186 23:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:11.187 23:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:11.444 23:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:15:11.444 23:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:11.444 23:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:11.444 23:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:11.444 23:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:11.444 23:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.444 23:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.444 23:18:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.444 23:18:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.444 23:18:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.444 23:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.444 23:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.007 00:15:12.007 23:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:12.007 23:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:12.007 23:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.264 23:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.264 23:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.264 23:18:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.264 23:18:27 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:12.264 23:18:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.264 23:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:12.264 { 00:15:12.264 "cntlid": 51, 00:15:12.264 "qid": 0, 00:15:12.264 "state": "enabled", 00:15:12.264 "thread": "nvmf_tgt_poll_group_000", 00:15:12.264 "listen_address": { 00:15:12.264 "trtype": "TCP", 00:15:12.264 "adrfam": "IPv4", 00:15:12.264 "traddr": "10.0.0.2", 00:15:12.264 "trsvcid": "4420" 00:15:12.264 }, 00:15:12.264 "peer_address": { 00:15:12.264 "trtype": "TCP", 00:15:12.264 "adrfam": "IPv4", 00:15:12.264 "traddr": "10.0.0.1", 00:15:12.264 "trsvcid": "55180" 00:15:12.264 }, 00:15:12.264 "auth": { 00:15:12.264 "state": "completed", 00:15:12.264 "digest": "sha384", 00:15:12.264 "dhgroup": "null" 00:15:12.264 } 00:15:12.264 } 00:15:12.264 ]' 00:15:12.264 23:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:12.264 23:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:12.264 23:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:12.264 23:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:12.264 23:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:12.264 23:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.264 23:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.264 23:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.521 23:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:NzRlNDllZDZiMzY5NTI3ZTgyMDczZjhhZmEzYjU5NTGFKtjo: --dhchap-ctrl-secret DHHC-1:02:Y2ViZTZjMjNkNWY4ZTc5ZWI0OTRkZDNhNzJmNjQ4MzgyNWMyZjNmOGE1ZWNmZTJmNRhF4A==: 00:15:13.451 23:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.451 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.451 23:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:13.451 23:18:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.451 23:18:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.451 23:18:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.451 23:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:13.451 23:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:13.451 23:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:13.708 23:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:15:13.708 23:18:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:13.708 23:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:13.708 23:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:13.708 23:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:13.708 23:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.708 23:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.708 23:18:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.709 23:18:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.709 23:18:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.709 23:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.709 23:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.966 00:15:13.966 23:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:13.966 23:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:13.966 23:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.224 23:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.224 23:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.224 23:18:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.224 23:18:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.224 23:18:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.224 23:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:14.224 { 00:15:14.224 "cntlid": 53, 00:15:14.224 "qid": 0, 00:15:14.224 "state": "enabled", 00:15:14.224 "thread": "nvmf_tgt_poll_group_000", 00:15:14.224 "listen_address": { 00:15:14.224 "trtype": "TCP", 00:15:14.224 "adrfam": "IPv4", 00:15:14.224 "traddr": "10.0.0.2", 00:15:14.224 "trsvcid": "4420" 00:15:14.224 }, 00:15:14.224 "peer_address": { 00:15:14.224 "trtype": "TCP", 00:15:14.224 "adrfam": "IPv4", 00:15:14.224 "traddr": "10.0.0.1", 00:15:14.224 "trsvcid": "55192" 00:15:14.224 }, 00:15:14.224 "auth": { 00:15:14.224 "state": "completed", 00:15:14.224 "digest": "sha384", 00:15:14.224 "dhgroup": "null" 00:15:14.224 } 00:15:14.224 } 00:15:14.224 ]' 00:15:14.224 23:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:14.481 23:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:15:14.481 23:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:14.481 23:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:14.481 23:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:14.481 23:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.481 23:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.481 23:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.740 23:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:Yjk5M2MyMmFmN2ZkNmEwZmZiMzlhNWQ2OWEzZDU5ODBhNjhlNGUwMDU0NmJmYmQ3f1jA0Q==: --dhchap-ctrl-secret DHHC-1:01:ZGMwOTQwYWVlNGZhMDM4YTA4MDZmYzkyNjU4MDE2MmLcU4Wx: 00:15:15.671 23:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.671 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.671 23:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:15.671 23:18:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.671 23:18:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.671 23:18:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.671 23:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:15.671 23:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:15.671 23:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:15.928 23:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:15:15.928 23:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:15.928 23:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:15.928 23:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:15.928 23:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:15.928 23:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.928 23:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:15:15.928 23:18:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.928 23:18:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.928 23:18:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.928 23:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:15.928 23:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:16.184 00:15:16.442 23:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:16.442 23:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.442 23:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:16.442 23:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.442 23:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.442 23:18:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.442 23:18:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.443 23:18:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.443 23:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:16.443 { 00:15:16.443 "cntlid": 55, 00:15:16.443 "qid": 0, 00:15:16.443 "state": "enabled", 00:15:16.443 "thread": "nvmf_tgt_poll_group_000", 00:15:16.443 "listen_address": { 00:15:16.443 "trtype": "TCP", 00:15:16.443 "adrfam": "IPv4", 00:15:16.443 "traddr": "10.0.0.2", 00:15:16.443 "trsvcid": "4420" 00:15:16.443 }, 00:15:16.443 "peer_address": { 00:15:16.443 "trtype": "TCP", 00:15:16.443 "adrfam": "IPv4", 00:15:16.443 "traddr": "10.0.0.1", 00:15:16.443 "trsvcid": "40184" 00:15:16.443 }, 00:15:16.443 "auth": { 00:15:16.443 "state": "completed", 00:15:16.443 "digest": "sha384", 00:15:16.443 "dhgroup": "null" 00:15:16.443 } 00:15:16.443 } 00:15:16.443 ]' 00:15:16.701 23:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:16.701 23:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:16.701 23:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:16.701 23:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:16.701 23:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:16.701 23:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.701 23:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.701 23:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.959 23:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTc4N2Q5YjQ1Y2ZhN2YyZmUzMTBlYTk1YmE3MGI5ZmQ3ODQ3YTM5YWVmNzIwYTQwZmZhYTc1Y2VkMjVjNjQ1MSpPd20=: 00:15:17.890 23:18:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.890 23:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:17.890 23:18:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.890 23:18:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.890 23:18:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.890 23:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:17.890 23:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:17.890 23:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:17.890 23:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:18.154 23:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:15:18.154 23:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:18.154 23:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:18.154 23:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:18.154 23:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:18.154 23:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.154 23:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:18.154 23:18:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.154 23:18:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.154 23:18:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.154 23:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:18.154 23:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:18.410 00:15:18.410 23:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:18.410 23:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:18.410 23:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.667 23:18:33 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.668 23:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.668 23:18:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.668 23:18:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.668 23:18:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.668 23:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:18.668 { 00:15:18.668 "cntlid": 57, 00:15:18.668 "qid": 0, 00:15:18.668 "state": "enabled", 00:15:18.668 "thread": "nvmf_tgt_poll_group_000", 00:15:18.668 "listen_address": { 00:15:18.668 "trtype": "TCP", 00:15:18.668 "adrfam": "IPv4", 00:15:18.668 "traddr": "10.0.0.2", 00:15:18.668 "trsvcid": "4420" 00:15:18.668 }, 00:15:18.668 "peer_address": { 00:15:18.668 "trtype": "TCP", 00:15:18.668 "adrfam": "IPv4", 00:15:18.668 "traddr": "10.0.0.1", 00:15:18.668 "trsvcid": "40206" 00:15:18.668 }, 00:15:18.668 "auth": { 00:15:18.668 "state": "completed", 00:15:18.668 "digest": "sha384", 00:15:18.668 "dhgroup": "ffdhe2048" 00:15:18.668 } 00:15:18.668 } 00:15:18.668 ]' 00:15:18.668 23:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:18.925 23:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:18.925 23:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:18.925 23:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:18.925 23:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:18.925 23:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.925 23:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.926 23:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.183 23:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZTBjNTgzZTIyM2Y0NDIwMjFiMWU3MDRmMzA5OTVhZGY1MGI2YWY0Y2Y3NmY4YjhlDgIwtw==: --dhchap-ctrl-secret DHHC-1:03:Y2I3Y2FiNzA1YTI0NjI3YTEwNGY5Y2M4ODA4YTc2MWNmNTdjYTY2MWZjNDdmZDExMWExZTllMGM1NWI5YmY4McqrS28=: 00:15:20.115 23:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:20.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:20.115 23:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:20.115 23:18:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.115 23:18:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.115 23:18:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.115 23:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:20.115 23:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:20.115 23:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:20.371 23:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:15:20.371 23:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:20.371 23:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:20.371 23:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:20.371 23:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:20.371 23:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.371 23:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.371 23:18:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.371 23:18:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.371 23:18:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.371 23:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.371 23:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.936 00:15:20.936 23:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:20.936 23:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:20.936 23:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.936 23:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.936 23:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.936 23:18:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.936 23:18:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.936 23:18:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.936 23:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:20.936 { 00:15:20.936 "cntlid": 59, 00:15:20.936 "qid": 0, 00:15:20.936 "state": "enabled", 00:15:20.936 "thread": "nvmf_tgt_poll_group_000", 00:15:20.936 "listen_address": { 00:15:20.936 "trtype": "TCP", 00:15:20.936 "adrfam": "IPv4", 00:15:20.936 "traddr": "10.0.0.2", 00:15:20.936 "trsvcid": "4420" 00:15:20.936 }, 00:15:20.936 "peer_address": { 00:15:20.936 "trtype": "TCP", 00:15:20.936 "adrfam": "IPv4", 00:15:20.936 
"traddr": "10.0.0.1", 00:15:20.936 "trsvcid": "40236" 00:15:20.936 }, 00:15:20.936 "auth": { 00:15:20.936 "state": "completed", 00:15:20.936 "digest": "sha384", 00:15:20.936 "dhgroup": "ffdhe2048" 00:15:20.936 } 00:15:20.936 } 00:15:20.936 ]' 00:15:20.936 23:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:21.192 23:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:21.193 23:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:21.193 23:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:21.193 23:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:21.193 23:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:21.193 23:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:21.193 23:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.450 23:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:NzRlNDllZDZiMzY5NTI3ZTgyMDczZjhhZmEzYjU5NTGFKtjo: --dhchap-ctrl-secret DHHC-1:02:Y2ViZTZjMjNkNWY4ZTc5ZWI0OTRkZDNhNzJmNjQ4MzgyNWMyZjNmOGE1ZWNmZTJmNRhF4A==: 00:15:22.377 23:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.377 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.377 23:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:22.377 23:18:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.377 23:18:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.377 23:18:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.377 23:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:22.377 23:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:22.377 23:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:22.634 23:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:15:22.634 23:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:22.634 23:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:22.634 23:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:22.634 23:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:22.634 23:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.634 23:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:22.634 23:18:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.634 23:18:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.634 23:18:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.634 23:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:22.634 23:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.198 00:15:23.198 23:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:23.198 23:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:23.198 23:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:23.456 23:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:23.456 23:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:23.456 23:18:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.456 23:18:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.456 23:18:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.456 23:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:23.456 { 00:15:23.456 "cntlid": 61, 00:15:23.456 "qid": 0, 00:15:23.456 "state": "enabled", 00:15:23.456 "thread": "nvmf_tgt_poll_group_000", 00:15:23.456 "listen_address": { 00:15:23.456 "trtype": "TCP", 00:15:23.456 "adrfam": "IPv4", 00:15:23.456 "traddr": "10.0.0.2", 00:15:23.456 "trsvcid": "4420" 00:15:23.456 }, 00:15:23.456 "peer_address": { 00:15:23.456 "trtype": "TCP", 00:15:23.456 "adrfam": "IPv4", 00:15:23.456 "traddr": "10.0.0.1", 00:15:23.456 "trsvcid": "40258" 00:15:23.456 }, 00:15:23.456 "auth": { 00:15:23.456 "state": "completed", 00:15:23.456 "digest": "sha384", 00:15:23.456 "dhgroup": "ffdhe2048" 00:15:23.456 } 00:15:23.456 } 00:15:23.456 ]' 00:15:23.456 23:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:23.456 23:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:23.456 23:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:23.456 23:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:23.456 23:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:23.456 23:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:23.456 23:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:23.456 23:18:38 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:23.713 23:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:Yjk5M2MyMmFmN2ZkNmEwZmZiMzlhNWQ2OWEzZDU5ODBhNjhlNGUwMDU0NmJmYmQ3f1jA0Q==: --dhchap-ctrl-secret DHHC-1:01:ZGMwOTQwYWVlNGZhMDM4YTA4MDZmYzkyNjU4MDE2MmLcU4Wx: 00:15:24.644 23:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:24.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:24.644 23:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:24.644 23:18:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.644 23:18:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.644 23:18:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.644 23:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:24.644 23:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:24.644 23:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:24.902 23:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:15:24.902 23:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:24.902 23:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:24.902 23:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:24.902 23:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:24.902 23:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:24.902 23:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:15:24.902 23:18:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.902 23:18:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.902 23:18:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.902 23:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:24.902 23:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:25.465 00:15:25.465 23:18:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:25.465 23:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.465 23:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:25.722 23:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.722 23:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:25.722 23:18:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.722 23:18:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.722 23:18:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.722 23:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:25.722 { 00:15:25.722 "cntlid": 63, 00:15:25.722 "qid": 0, 00:15:25.722 "state": "enabled", 00:15:25.722 "thread": "nvmf_tgt_poll_group_000", 00:15:25.722 "listen_address": { 00:15:25.722 "trtype": "TCP", 00:15:25.722 "adrfam": "IPv4", 00:15:25.722 "traddr": "10.0.0.2", 00:15:25.722 "trsvcid": "4420" 00:15:25.722 }, 00:15:25.722 "peer_address": { 00:15:25.722 "trtype": "TCP", 00:15:25.722 "adrfam": "IPv4", 00:15:25.722 "traddr": "10.0.0.1", 00:15:25.722 "trsvcid": "59562" 00:15:25.722 }, 00:15:25.722 "auth": { 00:15:25.722 "state": "completed", 00:15:25.722 "digest": "sha384", 00:15:25.722 "dhgroup": "ffdhe2048" 00:15:25.722 } 00:15:25.722 } 00:15:25.722 ]' 00:15:25.722 23:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:25.722 23:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:25.722 23:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:25.722 23:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:25.722 23:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:25.722 23:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:25.722 23:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:25.722 23:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:25.979 23:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTc4N2Q5YjQ1Y2ZhN2YyZmUzMTBlYTk1YmE3MGI5ZmQ3ODQ3YTM5YWVmNzIwYTQwZmZhYTc1Y2VkMjVjNjQ1MSpPd20=: 00:15:26.909 23:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.909 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.909 23:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:26.909 23:18:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.909 23:18:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:26.909 23:18:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.909 23:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:26.909 23:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:26.909 23:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:26.909 23:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:27.165 23:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:15:27.165 23:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:27.165 23:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:27.165 23:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:27.165 23:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:27.165 23:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.165 23:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.165 23:18:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.165 23:18:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.165 23:18:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.165 23:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.165 23:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.747 00:15:27.747 23:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:27.747 23:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:27.747 23:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.747 23:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.747 23:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:27.747 23:18:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.747 23:18:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.747 23:18:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.747 23:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:27.747 { 
00:15:27.747 "cntlid": 65, 00:15:27.747 "qid": 0, 00:15:27.747 "state": "enabled", 00:15:27.747 "thread": "nvmf_tgt_poll_group_000", 00:15:27.747 "listen_address": { 00:15:27.747 "trtype": "TCP", 00:15:27.747 "adrfam": "IPv4", 00:15:27.747 "traddr": "10.0.0.2", 00:15:27.747 "trsvcid": "4420" 00:15:27.747 }, 00:15:27.747 "peer_address": { 00:15:27.747 "trtype": "TCP", 00:15:27.747 "adrfam": "IPv4", 00:15:27.747 "traddr": "10.0.0.1", 00:15:27.747 "trsvcid": "59596" 00:15:27.747 }, 00:15:27.747 "auth": { 00:15:27.747 "state": "completed", 00:15:27.747 "digest": "sha384", 00:15:27.747 "dhgroup": "ffdhe3072" 00:15:27.747 } 00:15:27.747 } 00:15:27.747 ]' 00:15:27.747 23:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:28.032 23:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:28.032 23:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:28.032 23:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:28.032 23:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:28.032 23:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.032 23:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.032 23:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:28.294 23:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZTBjNTgzZTIyM2Y0NDIwMjFiMWU3MDRmMzA5OTVhZGY1MGI2YWY0Y2Y3NmY4YjhlDgIwtw==: --dhchap-ctrl-secret DHHC-1:03:Y2I3Y2FiNzA1YTI0NjI3YTEwNGY5Y2M4ODA4YTc2MWNmNTdjYTY2MWZjNDdmZDExMWExZTllMGM1NWI5YmY4McqrS28=: 00:15:29.226 23:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:29.226 23:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:29.226 23:18:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.226 23:18:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.226 23:18:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.226 23:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:29.226 23:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:29.226 23:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:29.483 23:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:15:29.483 23:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:29.483 23:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:15:29.483 23:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:29.483 23:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:29.483 23:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.483 23:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:29.483 23:18:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.483 23:18:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.483 23:18:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.483 23:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:29.483 23:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:29.741 00:15:29.741 23:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:29.741 23:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:29.741 23:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.998 23:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.998 23:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.998 23:18:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.998 23:18:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.998 23:18:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.998 23:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:29.998 { 00:15:29.998 "cntlid": 67, 00:15:29.998 "qid": 0, 00:15:29.998 "state": "enabled", 00:15:29.998 "thread": "nvmf_tgt_poll_group_000", 00:15:29.998 "listen_address": { 00:15:29.998 "trtype": "TCP", 00:15:29.998 "adrfam": "IPv4", 00:15:29.998 "traddr": "10.0.0.2", 00:15:29.998 "trsvcid": "4420" 00:15:29.998 }, 00:15:29.998 "peer_address": { 00:15:29.998 "trtype": "TCP", 00:15:29.998 "adrfam": "IPv4", 00:15:29.998 "traddr": "10.0.0.1", 00:15:29.998 "trsvcid": "59626" 00:15:29.998 }, 00:15:29.998 "auth": { 00:15:29.998 "state": "completed", 00:15:29.998 "digest": "sha384", 00:15:29.998 "dhgroup": "ffdhe3072" 00:15:29.998 } 00:15:29.998 } 00:15:29.998 ]' 00:15:29.998 23:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:29.998 23:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:29.998 23:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:30.256 23:18:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:30.256 23:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:30.256 23:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.256 23:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.256 23:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.512 23:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:NzRlNDllZDZiMzY5NTI3ZTgyMDczZjhhZmEzYjU5NTGFKtjo: --dhchap-ctrl-secret DHHC-1:02:Y2ViZTZjMjNkNWY4ZTc5ZWI0OTRkZDNhNzJmNjQ4MzgyNWMyZjNmOGE1ZWNmZTJmNRhF4A==: 00:15:31.442 23:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.442 23:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:31.442 23:18:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.442 23:18:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.442 23:18:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.442 23:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:31.442 23:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:31.442 23:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:31.699 23:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:15:31.699 23:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:31.699 23:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:31.699 23:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:31.699 23:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:31.699 23:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.699 23:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:31.699 23:18:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.699 23:18:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.699 23:18:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.699 23:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:31.699 23:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.263 00:15:32.264 23:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:32.264 23:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:32.264 23:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.264 23:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.520 23:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.520 23:18:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.521 23:18:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.521 23:18:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.521 23:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:32.521 { 00:15:32.521 "cntlid": 69, 00:15:32.521 "qid": 0, 00:15:32.521 "state": "enabled", 00:15:32.521 "thread": "nvmf_tgt_poll_group_000", 00:15:32.521 "listen_address": { 00:15:32.521 "trtype": "TCP", 00:15:32.521 "adrfam": "IPv4", 00:15:32.521 "traddr": "10.0.0.2", 00:15:32.521 "trsvcid": "4420" 00:15:32.521 }, 00:15:32.521 "peer_address": { 00:15:32.521 "trtype": "TCP", 00:15:32.521 "adrfam": "IPv4", 00:15:32.521 "traddr": "10.0.0.1", 00:15:32.521 "trsvcid": "59644" 00:15:32.521 }, 00:15:32.521 "auth": { 00:15:32.521 "state": "completed", 00:15:32.521 "digest": "sha384", 00:15:32.521 "dhgroup": "ffdhe3072" 00:15:32.521 } 00:15:32.521 } 00:15:32.521 ]' 00:15:32.521 23:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:32.521 23:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:32.521 23:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:32.521 23:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:32.521 23:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:32.521 23:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.521 23:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.521 23:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.778 23:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:Yjk5M2MyMmFmN2ZkNmEwZmZiMzlhNWQ2OWEzZDU5ODBhNjhlNGUwMDU0NmJmYmQ3f1jA0Q==: --dhchap-ctrl-secret 
DHHC-1:01:ZGMwOTQwYWVlNGZhMDM4YTA4MDZmYzkyNjU4MDE2MmLcU4Wx: 00:15:33.710 23:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.710 23:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:33.710 23:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.710 23:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.710 23:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.710 23:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:33.710 23:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:33.710 23:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:33.968 23:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:15:33.968 23:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:33.968 23:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:33.968 23:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:33.968 23:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:33.968 23:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.968 23:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:15:33.968 23:18:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.968 23:18:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.968 23:18:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.968 23:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:33.968 23:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:34.533 00:15:34.533 23:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:34.533 23:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:34.533 23:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.533 23:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.533 23:18:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.533 23:18:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.533 23:18:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.533 23:18:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.533 23:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:34.533 { 00:15:34.533 "cntlid": 71, 00:15:34.534 "qid": 0, 00:15:34.534 "state": "enabled", 00:15:34.534 "thread": "nvmf_tgt_poll_group_000", 00:15:34.534 "listen_address": { 00:15:34.534 "trtype": "TCP", 00:15:34.534 "adrfam": "IPv4", 00:15:34.534 "traddr": "10.0.0.2", 00:15:34.534 "trsvcid": "4420" 00:15:34.534 }, 00:15:34.534 "peer_address": { 00:15:34.534 "trtype": "TCP", 00:15:34.534 "adrfam": "IPv4", 00:15:34.534 "traddr": "10.0.0.1", 00:15:34.534 "trsvcid": "59670" 00:15:34.534 }, 00:15:34.534 "auth": { 00:15:34.534 "state": "completed", 00:15:34.534 "digest": "sha384", 00:15:34.534 "dhgroup": "ffdhe3072" 00:15:34.534 } 00:15:34.534 } 00:15:34.534 ]' 00:15:34.534 23:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:34.791 23:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:34.791 23:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:34.791 23:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:34.791 23:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:34.791 23:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.791 23:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.791 23:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.049 23:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTc4N2Q5YjQ1Y2ZhN2YyZmUzMTBlYTk1YmE3MGI5ZmQ3ODQ3YTM5YWVmNzIwYTQwZmZhYTc1Y2VkMjVjNjQ1MSpPd20=: 00:15:35.982 23:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.982 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.982 23:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:35.982 23:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.982 23:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.982 23:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.982 23:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:35.982 23:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:35.982 23:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:35.982 23:18:51 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:36.239 23:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:15:36.239 23:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:36.239 23:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:36.239 23:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:36.239 23:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:36.239 23:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.239 23:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.239 23:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.239 23:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.239 23:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.239 23:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.239 23:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.803 00:15:36.803 23:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:36.803 23:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.803 23:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:36.803 23:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.803 23:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.803 23:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.803 23:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.803 23:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.803 23:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:36.803 { 00:15:36.803 "cntlid": 73, 00:15:36.803 "qid": 0, 00:15:36.803 "state": "enabled", 00:15:36.803 "thread": "nvmf_tgt_poll_group_000", 00:15:36.803 "listen_address": { 00:15:36.803 "trtype": "TCP", 00:15:36.803 "adrfam": "IPv4", 00:15:36.803 "traddr": "10.0.0.2", 00:15:36.803 "trsvcid": "4420" 00:15:36.803 }, 00:15:36.803 "peer_address": { 00:15:36.803 "trtype": "TCP", 00:15:36.803 "adrfam": "IPv4", 00:15:36.803 "traddr": "10.0.0.1", 00:15:36.803 "trsvcid": "42926" 00:15:36.803 }, 00:15:36.803 "auth": { 00:15:36.803 
"state": "completed", 00:15:36.803 "digest": "sha384", 00:15:36.803 "dhgroup": "ffdhe4096" 00:15:36.803 } 00:15:36.803 } 00:15:36.803 ]' 00:15:36.803 23:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:37.060 23:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:37.060 23:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:37.060 23:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:37.060 23:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:37.060 23:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.060 23:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.060 23:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.317 23:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZTBjNTgzZTIyM2Y0NDIwMjFiMWU3MDRmMzA5OTVhZGY1MGI2YWY0Y2Y3NmY4YjhlDgIwtw==: --dhchap-ctrl-secret DHHC-1:03:Y2I3Y2FiNzA1YTI0NjI3YTEwNGY5Y2M4ODA4YTc2MWNmNTdjYTY2MWZjNDdmZDExMWExZTllMGM1NWI5YmY4McqrS28=: 00:15:38.250 23:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.250 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.250 23:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:38.250 23:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.250 23:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.250 23:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.250 23:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:38.250 23:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:38.250 23:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:38.507 23:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:15:38.507 23:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:38.507 23:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:38.507 23:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:38.507 23:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:38.507 23:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.507 23:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:38.507 23:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.507 23:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.507 23:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.507 23:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:38.507 23:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:38.763 00:15:38.763 23:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:38.763 23:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:38.763 23:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.020 23:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.020 23:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.020 23:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.020 23:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.020 23:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.020 23:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:39.020 { 00:15:39.020 "cntlid": 75, 00:15:39.020 "qid": 0, 00:15:39.020 "state": "enabled", 00:15:39.020 "thread": "nvmf_tgt_poll_group_000", 00:15:39.020 "listen_address": { 00:15:39.020 "trtype": "TCP", 00:15:39.020 "adrfam": "IPv4", 00:15:39.020 "traddr": "10.0.0.2", 00:15:39.020 "trsvcid": "4420" 00:15:39.020 }, 00:15:39.020 "peer_address": { 00:15:39.020 "trtype": "TCP", 00:15:39.020 "adrfam": "IPv4", 00:15:39.020 "traddr": "10.0.0.1", 00:15:39.020 "trsvcid": "42966" 00:15:39.020 }, 00:15:39.020 "auth": { 00:15:39.020 "state": "completed", 00:15:39.020 "digest": "sha384", 00:15:39.020 "dhgroup": "ffdhe4096" 00:15:39.020 } 00:15:39.020 } 00:15:39.020 ]' 00:15:39.020 23:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:39.277 23:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:39.277 23:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:39.277 23:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:39.277 23:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:39.277 23:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.277 23:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.277 23:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.534 23:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:NzRlNDllZDZiMzY5NTI3ZTgyMDczZjhhZmEzYjU5NTGFKtjo: --dhchap-ctrl-secret DHHC-1:02:Y2ViZTZjMjNkNWY4ZTc5ZWI0OTRkZDNhNzJmNjQ4MzgyNWMyZjNmOGE1ZWNmZTJmNRhF4A==: 00:15:40.467 23:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.467 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.467 23:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:40.467 23:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.467 23:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.467 23:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.467 23:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:40.467 23:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:40.467 23:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:40.724 23:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:15:40.724 23:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:40.724 23:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:40.724 23:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:40.724 23:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:40.724 23:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.724 23:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:40.724 23:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.724 23:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.724 23:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.724 23:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:40.724 23:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:15:40.981 00:15:41.254 23:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:41.254 23:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:41.254 23:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.254 23:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.254 23:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.254 23:18:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.254 23:18:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.254 23:18:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.254 23:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:41.254 { 00:15:41.254 "cntlid": 77, 00:15:41.254 "qid": 0, 00:15:41.254 "state": "enabled", 00:15:41.254 "thread": "nvmf_tgt_poll_group_000", 00:15:41.254 "listen_address": { 00:15:41.254 "trtype": "TCP", 00:15:41.254 "adrfam": "IPv4", 00:15:41.254 "traddr": "10.0.0.2", 00:15:41.254 "trsvcid": "4420" 00:15:41.254 }, 00:15:41.254 "peer_address": { 00:15:41.254 "trtype": "TCP", 00:15:41.254 "adrfam": "IPv4", 00:15:41.254 "traddr": "10.0.0.1", 00:15:41.254 "trsvcid": "42992" 00:15:41.254 }, 00:15:41.254 "auth": { 00:15:41.254 "state": "completed", 00:15:41.254 "digest": "sha384", 00:15:41.254 "dhgroup": "ffdhe4096" 00:15:41.254 } 00:15:41.254 } 00:15:41.254 ]' 00:15:41.254 23:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:41.511 23:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:41.511 23:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:41.511 23:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:41.511 23:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:41.511 23:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.511 23:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.511 23:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.769 23:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:Yjk5M2MyMmFmN2ZkNmEwZmZiMzlhNWQ2OWEzZDU5ODBhNjhlNGUwMDU0NmJmYmQ3f1jA0Q==: --dhchap-ctrl-secret DHHC-1:01:ZGMwOTQwYWVlNGZhMDM4YTA4MDZmYzkyNjU4MDE2MmLcU4Wx: 00:15:42.700 23:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.700 23:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:42.700 23:18:57 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.700 23:18:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.700 23:18:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.700 23:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:42.700 23:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:42.700 23:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:42.958 23:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:15:42.958 23:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:42.958 23:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:42.958 23:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:42.958 23:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:42.958 23:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.958 23:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:15:42.958 23:18:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.958 23:18:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.958 23:18:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.958 23:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:42.958 23:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:43.215 00:15:43.473 23:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:43.473 23:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:43.473 23:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.473 23:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.473 23:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.473 23:18:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.473 23:18:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.731 23:18:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.731 23:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:43.731 { 00:15:43.731 "cntlid": 79, 00:15:43.731 "qid": 
0, 00:15:43.731 "state": "enabled", 00:15:43.731 "thread": "nvmf_tgt_poll_group_000", 00:15:43.731 "listen_address": { 00:15:43.731 "trtype": "TCP", 00:15:43.731 "adrfam": "IPv4", 00:15:43.731 "traddr": "10.0.0.2", 00:15:43.731 "trsvcid": "4420" 00:15:43.731 }, 00:15:43.731 "peer_address": { 00:15:43.731 "trtype": "TCP", 00:15:43.731 "adrfam": "IPv4", 00:15:43.731 "traddr": "10.0.0.1", 00:15:43.731 "trsvcid": "43016" 00:15:43.731 }, 00:15:43.731 "auth": { 00:15:43.731 "state": "completed", 00:15:43.731 "digest": "sha384", 00:15:43.731 "dhgroup": "ffdhe4096" 00:15:43.731 } 00:15:43.731 } 00:15:43.731 ]' 00:15:43.731 23:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:43.731 23:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:43.731 23:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:43.731 23:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:43.731 23:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:43.731 23:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.731 23:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.731 23:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.989 23:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTc4N2Q5YjQ1Y2ZhN2YyZmUzMTBlYTk1YmE3MGI5ZmQ3ODQ3YTM5YWVmNzIwYTQwZmZhYTc1Y2VkMjVjNjQ1MSpPd20=: 00:15:44.920 23:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.920 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.920 23:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:44.920 23:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.921 23:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.921 23:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.921 23:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:44.921 23:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:44.921 23:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:44.921 23:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:45.177 23:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:15:45.177 23:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:45.177 23:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:45.177 23:19:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:45.177 23:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:45.177 23:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.177 23:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.177 23:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.177 23:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.177 23:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.177 23:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.177 23:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.740 00:15:45.740 23:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:45.740 23:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:45.740 23:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.998 23:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.998 23:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.998 23:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.998 23:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.998 23:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.998 23:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:45.998 { 00:15:45.998 "cntlid": 81, 00:15:45.998 "qid": 0, 00:15:45.998 "state": "enabled", 00:15:45.998 "thread": "nvmf_tgt_poll_group_000", 00:15:45.998 "listen_address": { 00:15:45.998 "trtype": "TCP", 00:15:45.998 "adrfam": "IPv4", 00:15:45.998 "traddr": "10.0.0.2", 00:15:45.998 "trsvcid": "4420" 00:15:45.998 }, 00:15:45.998 "peer_address": { 00:15:45.998 "trtype": "TCP", 00:15:45.998 "adrfam": "IPv4", 00:15:45.998 "traddr": "10.0.0.1", 00:15:45.998 "trsvcid": "48306" 00:15:45.998 }, 00:15:45.998 "auth": { 00:15:45.998 "state": "completed", 00:15:45.998 "digest": "sha384", 00:15:45.998 "dhgroup": "ffdhe6144" 00:15:45.998 } 00:15:45.998 } 00:15:45.998 ]' 00:15:45.998 23:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:45.998 23:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:45.998 23:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:45.998 23:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:45.998 23:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:46.255 23:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.255 23:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.255 23:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.511 23:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZTBjNTgzZTIyM2Y0NDIwMjFiMWU3MDRmMzA5OTVhZGY1MGI2YWY0Y2Y3NmY4YjhlDgIwtw==: --dhchap-ctrl-secret DHHC-1:03:Y2I3Y2FiNzA1YTI0NjI3YTEwNGY5Y2M4ODA4YTc2MWNmNTdjYTY2MWZjNDdmZDExMWExZTllMGM1NWI5YmY4McqrS28=: 00:15:47.440 23:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.440 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.440 23:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:47.440 23:19:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.440 23:19:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.440 23:19:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.440 23:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:47.441 23:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:47.441 23:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:47.698 23:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:15:47.698 23:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:47.698 23:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:47.698 23:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:47.698 23:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:47.698 23:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.698 23:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:47.698 23:19:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.698 23:19:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.698 23:19:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.698 23:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:47.698 23:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.261 00:15:48.261 23:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:48.261 23:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:48.261 23:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.518 23:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.518 23:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.518 23:19:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.518 23:19:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.518 23:19:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.518 23:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:48.518 { 00:15:48.518 "cntlid": 83, 00:15:48.518 "qid": 0, 00:15:48.518 "state": "enabled", 00:15:48.518 "thread": "nvmf_tgt_poll_group_000", 00:15:48.518 "listen_address": { 00:15:48.518 "trtype": "TCP", 00:15:48.518 "adrfam": "IPv4", 00:15:48.518 "traddr": "10.0.0.2", 00:15:48.518 "trsvcid": "4420" 00:15:48.518 }, 00:15:48.518 "peer_address": { 00:15:48.518 "trtype": "TCP", 00:15:48.518 "adrfam": "IPv4", 00:15:48.518 "traddr": "10.0.0.1", 00:15:48.518 "trsvcid": "48328" 00:15:48.518 }, 00:15:48.518 "auth": { 00:15:48.518 "state": "completed", 00:15:48.518 "digest": "sha384", 00:15:48.518 "dhgroup": "ffdhe6144" 00:15:48.518 } 00:15:48.518 } 00:15:48.518 ]' 00:15:48.518 23:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:48.518 23:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:48.518 23:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:48.518 23:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:48.518 23:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:48.518 23:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.518 23:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.518 23:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.775 23:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:NzRlNDllZDZiMzY5NTI3ZTgyMDczZjhhZmEzYjU5NTGFKtjo: --dhchap-ctrl-secret 
DHHC-1:02:Y2ViZTZjMjNkNWY4ZTc5ZWI0OTRkZDNhNzJmNjQ4MzgyNWMyZjNmOGE1ZWNmZTJmNRhF4A==: 00:15:49.707 23:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.707 23:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:49.707 23:19:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.707 23:19:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.707 23:19:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.707 23:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:49.707 23:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:49.707 23:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:49.965 23:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:15:49.965 23:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:49.965 23:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:49.965 23:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:49.965 23:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:49.965 23:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.965 23:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:49.965 23:19:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.965 23:19:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.965 23:19:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.965 23:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:49.965 23:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.563 00:15:50.563 23:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:50.563 23:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:50.563 23:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.848 23:19:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.848 23:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.848 23:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.848 23:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.848 23:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.848 23:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:50.848 { 00:15:50.848 "cntlid": 85, 00:15:50.848 "qid": 0, 00:15:50.848 "state": "enabled", 00:15:50.848 "thread": "nvmf_tgt_poll_group_000", 00:15:50.848 "listen_address": { 00:15:50.848 "trtype": "TCP", 00:15:50.848 "adrfam": "IPv4", 00:15:50.848 "traddr": "10.0.0.2", 00:15:50.848 "trsvcid": "4420" 00:15:50.848 }, 00:15:50.848 "peer_address": { 00:15:50.848 "trtype": "TCP", 00:15:50.848 "adrfam": "IPv4", 00:15:50.848 "traddr": "10.0.0.1", 00:15:50.848 "trsvcid": "48350" 00:15:50.848 }, 00:15:50.848 "auth": { 00:15:50.848 "state": "completed", 00:15:50.848 "digest": "sha384", 00:15:50.848 "dhgroup": "ffdhe6144" 00:15:50.848 } 00:15:50.848 } 00:15:50.848 ]' 00:15:50.848 23:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:50.848 23:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:50.848 23:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:50.848 23:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:50.848 23:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:51.106 23:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.106 23:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.106 23:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.363 23:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:Yjk5M2MyMmFmN2ZkNmEwZmZiMzlhNWQ2OWEzZDU5ODBhNjhlNGUwMDU0NmJmYmQ3f1jA0Q==: --dhchap-ctrl-secret DHHC-1:01:ZGMwOTQwYWVlNGZhMDM4YTA4MDZmYzkyNjU4MDE2MmLcU4Wx: 00:15:52.294 23:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.295 23:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:52.295 23:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.295 23:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.295 23:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.295 23:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:52.295 23:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
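The trace above keeps repeating one connect_authenticate() pass per key: the host-side RPC server on /var/tmp/host.sock is restricted to a single DH-HMAC-CHAP digest/dhgroup pair, the target registers the host NQN with the matching key, and the attach is expected to finish with auth.state == "completed". A condensed sketch of one such pass, assuming the named keys (key2/ckey2) were already loaded into the keyring earlier in auth.sh and using $HOSTNQN as a stand-in for the host NQN shown in the trace:

  # host side: allow only sha384 + ffdhe6144 for DH-HMAC-CHAP (socket path as in the trace)
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

  # target side: permit $HOSTNQN with key2, plus ckey2 for bidirectional auth
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # host side: attach; this is what drives the in-band authentication exchange
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2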
00:15:52.295 23:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:52.551 23:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:15:52.551 23:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:52.551 23:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:52.551 23:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:52.551 23:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:52.551 23:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.551 23:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:15:52.551 23:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.551 23:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.551 23:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.552 23:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:52.552 23:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:53.114 00:15:53.114 23:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:53.114 23:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:53.114 23:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.370 23:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.370 23:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.370 23:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.370 23:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.370 23:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.370 23:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:53.370 { 00:15:53.370 "cntlid": 87, 00:15:53.370 "qid": 0, 00:15:53.370 "state": "enabled", 00:15:53.370 "thread": "nvmf_tgt_poll_group_000", 00:15:53.370 "listen_address": { 00:15:53.370 "trtype": "TCP", 00:15:53.370 "adrfam": "IPv4", 00:15:53.370 "traddr": "10.0.0.2", 00:15:53.371 "trsvcid": "4420" 00:15:53.371 }, 00:15:53.371 "peer_address": { 00:15:53.371 "trtype": "TCP", 00:15:53.371 "adrfam": "IPv4", 00:15:53.371 "traddr": "10.0.0.1", 00:15:53.371 "trsvcid": "48382" 00:15:53.371 }, 00:15:53.371 "auth": { 00:15:53.371 "state": "completed", 
00:15:53.371 "digest": "sha384", 00:15:53.371 "dhgroup": "ffdhe6144" 00:15:53.371 } 00:15:53.371 } 00:15:53.371 ]' 00:15:53.371 23:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:53.371 23:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:53.371 23:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:53.371 23:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:53.371 23:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:53.371 23:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.371 23:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.371 23:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.629 23:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTc4N2Q5YjQ1Y2ZhN2YyZmUzMTBlYTk1YmE3MGI5ZmQ3ODQ3YTM5YWVmNzIwYTQwZmZhYTc1Y2VkMjVjNjQ1MSpPd20=: 00:15:54.560 23:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.560 23:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:54.560 23:19:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.560 23:19:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.560 23:19:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.560 23:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:54.560 23:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:54.560 23:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:54.560 23:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:54.816 23:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:15:54.816 23:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:54.816 23:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:54.816 23:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:54.816 23:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:54.816 23:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.816 23:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:15:54.816 23:19:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.816 23:19:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.073 23:19:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.073 23:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.073 23:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:56.002 00:15:56.002 23:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:56.002 23:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.002 23:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:56.002 23:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.002 23:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.002 23:19:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.002 23:19:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.002 23:19:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.002 23:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:56.002 { 00:15:56.002 "cntlid": 89, 00:15:56.002 "qid": 0, 00:15:56.002 "state": "enabled", 00:15:56.002 "thread": "nvmf_tgt_poll_group_000", 00:15:56.002 "listen_address": { 00:15:56.002 "trtype": "TCP", 00:15:56.002 "adrfam": "IPv4", 00:15:56.002 "traddr": "10.0.0.2", 00:15:56.002 "trsvcid": "4420" 00:15:56.002 }, 00:15:56.002 "peer_address": { 00:15:56.002 "trtype": "TCP", 00:15:56.002 "adrfam": "IPv4", 00:15:56.002 "traddr": "10.0.0.1", 00:15:56.002 "trsvcid": "48396" 00:15:56.002 }, 00:15:56.002 "auth": { 00:15:56.002 "state": "completed", 00:15:56.002 "digest": "sha384", 00:15:56.002 "dhgroup": "ffdhe8192" 00:15:56.002 } 00:15:56.002 } 00:15:56.002 ]' 00:15:56.002 23:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:56.259 23:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:56.259 23:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:56.259 23:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:56.259 23:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:56.259 23:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.259 23:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.259 23:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.515 23:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZTBjNTgzZTIyM2Y0NDIwMjFiMWU3MDRmMzA5OTVhZGY1MGI2YWY0Y2Y3NmY4YjhlDgIwtw==: --dhchap-ctrl-secret DHHC-1:03:Y2I3Y2FiNzA1YTI0NjI3YTEwNGY5Y2M4ODA4YTc2MWNmNTdjYTY2MWZjNDdmZDExMWExZTllMGM1NWI5YmY4McqrS28=: 00:15:57.446 23:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.446 23:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:57.446 23:19:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.446 23:19:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.446 23:19:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.446 23:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:57.446 23:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:57.446 23:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:57.704 23:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:15:57.704 23:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:57.704 23:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:57.704 23:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:57.704 23:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:57.704 23:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.704 23:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.704 23:19:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.704 23:19:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.704 23:19:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.704 23:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.704 23:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
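Between the RPC-driven passes, the script also exercises the kernel initiator: after detaching the SPDK-side controller it connects with nvme-cli, passing the same secrets in their DHHC-1 wire format, then disconnects and removes the host from the subsystem. A rough equivalent of that check, with the secrets abbreviated and $HOSTNQN/$HOSTID standing in for the UUID-based values used in the trace:

  # kernel-initiator connect with in-band DH-HMAC-CHAP secrets
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$HOSTNQN" --hostid "$HOSTID" \
      --dhchap-secret "DHHC-1:00:<host secret>" \
      --dhchap-ctrl-secret "DHHC-1:03:<controller secret>"

  # teardown; "disconnected 1 controller(s)" confirms the authenticated connect succeeded
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0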
00:15:58.634 00:15:58.634 23:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:58.634 23:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:58.634 23:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.892 23:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.892 23:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.892 23:19:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.892 23:19:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.892 23:19:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.892 23:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:58.892 { 00:15:58.892 "cntlid": 91, 00:15:58.892 "qid": 0, 00:15:58.892 "state": "enabled", 00:15:58.892 "thread": "nvmf_tgt_poll_group_000", 00:15:58.892 "listen_address": { 00:15:58.892 "trtype": "TCP", 00:15:58.892 "adrfam": "IPv4", 00:15:58.892 "traddr": "10.0.0.2", 00:15:58.892 "trsvcid": "4420" 00:15:58.892 }, 00:15:58.892 "peer_address": { 00:15:58.892 "trtype": "TCP", 00:15:58.892 "adrfam": "IPv4", 00:15:58.892 "traddr": "10.0.0.1", 00:15:58.892 "trsvcid": "48076" 00:15:58.892 }, 00:15:58.892 "auth": { 00:15:58.892 "state": "completed", 00:15:58.892 "digest": "sha384", 00:15:58.892 "dhgroup": "ffdhe8192" 00:15:58.892 } 00:15:58.892 } 00:15:58.892 ]' 00:15:58.892 23:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:58.892 23:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:58.892 23:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:58.892 23:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:58.892 23:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:58.892 23:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.892 23:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.892 23:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.149 23:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:NzRlNDllZDZiMzY5NTI3ZTgyMDczZjhhZmEzYjU5NTGFKtjo: --dhchap-ctrl-secret DHHC-1:02:Y2ViZTZjMjNkNWY4ZTc5ZWI0OTRkZDNhNzJmNjQ4MzgyNWMyZjNmOGE1ZWNmZTJmNRhF4A==: 00:16:00.081 23:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.081 23:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:00.081 23:19:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:16:00.081 23:19:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.081 23:19:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.081 23:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:00.081 23:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:00.081 23:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:00.339 23:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:16:00.340 23:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:00.340 23:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:00.340 23:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:00.340 23:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:00.340 23:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.340 23:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.340 23:19:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.340 23:19:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.340 23:19:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.340 23:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.340 23:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.273 00:16:01.273 23:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:01.273 23:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:01.273 23:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.532 23:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.532 23:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.532 23:19:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.532 23:19:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.532 23:19:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.532 23:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:01.532 { 
00:16:01.532 "cntlid": 93, 00:16:01.532 "qid": 0, 00:16:01.532 "state": "enabled", 00:16:01.532 "thread": "nvmf_tgt_poll_group_000", 00:16:01.532 "listen_address": { 00:16:01.532 "trtype": "TCP", 00:16:01.532 "adrfam": "IPv4", 00:16:01.532 "traddr": "10.0.0.2", 00:16:01.532 "trsvcid": "4420" 00:16:01.532 }, 00:16:01.532 "peer_address": { 00:16:01.532 "trtype": "TCP", 00:16:01.532 "adrfam": "IPv4", 00:16:01.532 "traddr": "10.0.0.1", 00:16:01.532 "trsvcid": "48104" 00:16:01.532 }, 00:16:01.532 "auth": { 00:16:01.532 "state": "completed", 00:16:01.532 "digest": "sha384", 00:16:01.532 "dhgroup": "ffdhe8192" 00:16:01.532 } 00:16:01.532 } 00:16:01.532 ]' 00:16:01.532 23:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:01.789 23:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:01.789 23:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:01.789 23:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:01.789 23:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:01.789 23:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.789 23:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.790 23:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.047 23:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:Yjk5M2MyMmFmN2ZkNmEwZmZiMzlhNWQ2OWEzZDU5ODBhNjhlNGUwMDU0NmJmYmQ3f1jA0Q==: --dhchap-ctrl-secret DHHC-1:01:ZGMwOTQwYWVlNGZhMDM4YTA4MDZmYzkyNjU4MDE2MmLcU4Wx: 00:16:02.979 23:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.979 23:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:02.979 23:19:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.979 23:19:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.979 23:19:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.979 23:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:02.979 23:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:02.979 23:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:03.543 23:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:16:03.543 23:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:03.543 23:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:03.543 23:19:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:03.543 23:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:03.543 23:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.543 23:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:03.543 23:19:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.543 23:19:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.543 23:19:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.543 23:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:03.543 23:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:04.473 00:16:04.473 23:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:04.473 23:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:04.473 23:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.473 23:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.473 23:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.473 23:19:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.473 23:19:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.473 23:19:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.473 23:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:04.473 { 00:16:04.473 "cntlid": 95, 00:16:04.473 "qid": 0, 00:16:04.473 "state": "enabled", 00:16:04.473 "thread": "nvmf_tgt_poll_group_000", 00:16:04.473 "listen_address": { 00:16:04.473 "trtype": "TCP", 00:16:04.473 "adrfam": "IPv4", 00:16:04.473 "traddr": "10.0.0.2", 00:16:04.473 "trsvcid": "4420" 00:16:04.473 }, 00:16:04.473 "peer_address": { 00:16:04.473 "trtype": "TCP", 00:16:04.473 "adrfam": "IPv4", 00:16:04.473 "traddr": "10.0.0.1", 00:16:04.473 "trsvcid": "48138" 00:16:04.473 }, 00:16:04.473 "auth": { 00:16:04.473 "state": "completed", 00:16:04.473 "digest": "sha384", 00:16:04.473 "dhgroup": "ffdhe8192" 00:16:04.473 } 00:16:04.473 } 00:16:04.473 ]' 00:16:04.473 23:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:04.473 23:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:04.473 23:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:04.473 23:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:04.473 23:19:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:04.730 23:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.730 23:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.730 23:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.988 23:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTc4N2Q5YjQ1Y2ZhN2YyZmUzMTBlYTk1YmE3MGI5ZmQ3ODQ3YTM5YWVmNzIwYTQwZmZhYTc1Y2VkMjVjNjQ1MSpPd20=: 00:16:05.919 23:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.919 23:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:05.919 23:19:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.919 23:19:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.919 23:19:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.919 23:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:05.919 23:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:05.919 23:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:05.919 23:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:05.919 23:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:06.176 23:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:16:06.176 23:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:06.176 23:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:06.176 23:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:06.176 23:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:06.176 23:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.176 23:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.176 23:19:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.176 23:19:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.176 23:19:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.176 23:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.176 23:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.433 00:16:06.433 23:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:06.433 23:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:06.433 23:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.691 23:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.691 23:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.691 23:19:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.691 23:19:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.691 23:19:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.691 23:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:06.691 { 00:16:06.691 "cntlid": 97, 00:16:06.691 "qid": 0, 00:16:06.691 "state": "enabled", 00:16:06.691 "thread": "nvmf_tgt_poll_group_000", 00:16:06.691 "listen_address": { 00:16:06.691 "trtype": "TCP", 00:16:06.691 "adrfam": "IPv4", 00:16:06.691 "traddr": "10.0.0.2", 00:16:06.691 "trsvcid": "4420" 00:16:06.691 }, 00:16:06.691 "peer_address": { 00:16:06.691 "trtype": "TCP", 00:16:06.691 "adrfam": "IPv4", 00:16:06.691 "traddr": "10.0.0.1", 00:16:06.691 "trsvcid": "54886" 00:16:06.691 }, 00:16:06.691 "auth": { 00:16:06.691 "state": "completed", 00:16:06.691 "digest": "sha512", 00:16:06.691 "dhgroup": "null" 00:16:06.691 } 00:16:06.691 } 00:16:06.691 ]' 00:16:06.691 23:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:06.948 23:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:06.948 23:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:06.948 23:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:06.948 23:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:06.948 23:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.948 23:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.948 23:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.204 23:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZTBjNTgzZTIyM2Y0NDIwMjFiMWU3MDRmMzA5OTVhZGY1MGI2YWY0Y2Y3NmY4YjhlDgIwtw==: --dhchap-ctrl-secret 
DHHC-1:03:Y2I3Y2FiNzA1YTI0NjI3YTEwNGY5Y2M4ODA4YTc2MWNmNTdjYTY2MWZjNDdmZDExMWExZTllMGM1NWI5YmY4McqrS28=: 00:16:08.155 23:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.155 23:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:08.155 23:19:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.155 23:19:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.155 23:19:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.155 23:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:08.155 23:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:08.155 23:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:08.411 23:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:16:08.411 23:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:08.411 23:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:08.411 23:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:08.411 23:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:08.411 23:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.411 23:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.411 23:19:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.411 23:19:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.411 23:19:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.411 23:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.411 23:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.667 00:16:08.667 23:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:08.667 23:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:08.667 23:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.924 23:19:24 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.924 23:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.924 23:19:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.924 23:19:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.924 23:19:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.924 23:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:08.924 { 00:16:08.924 "cntlid": 99, 00:16:08.924 "qid": 0, 00:16:08.924 "state": "enabled", 00:16:08.924 "thread": "nvmf_tgt_poll_group_000", 00:16:08.924 "listen_address": { 00:16:08.924 "trtype": "TCP", 00:16:08.924 "adrfam": "IPv4", 00:16:08.924 "traddr": "10.0.0.2", 00:16:08.924 "trsvcid": "4420" 00:16:08.924 }, 00:16:08.924 "peer_address": { 00:16:08.924 "trtype": "TCP", 00:16:08.924 "adrfam": "IPv4", 00:16:08.924 "traddr": "10.0.0.1", 00:16:08.924 "trsvcid": "54918" 00:16:08.924 }, 00:16:08.924 "auth": { 00:16:08.924 "state": "completed", 00:16:08.924 "digest": "sha512", 00:16:08.924 "dhgroup": "null" 00:16:08.924 } 00:16:08.924 } 00:16:08.924 ]' 00:16:08.924 23:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:08.924 23:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:08.924 23:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:09.180 23:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:09.180 23:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:09.180 23:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.180 23:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.180 23:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.436 23:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:NzRlNDllZDZiMzY5NTI3ZTgyMDczZjhhZmEzYjU5NTGFKtjo: --dhchap-ctrl-secret DHHC-1:02:Y2ViZTZjMjNkNWY4ZTc5ZWI0OTRkZDNhNzJmNjQ4MzgyNWMyZjNmOGE1ZWNmZTJmNRhF4A==: 00:16:10.367 23:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.367 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.367 23:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:10.367 23:19:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.367 23:19:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.367 23:19:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.367 23:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:10.367 23:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:10.367 23:19:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:10.624 23:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:16:10.624 23:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:10.624 23:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:10.624 23:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:10.624 23:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:10.624 23:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.624 23:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.624 23:19:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.624 23:19:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.624 23:19:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.624 23:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.624 23:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.881 00:16:10.881 23:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:10.881 23:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.881 23:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:11.138 23:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.138 23:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.138 23:19:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.138 23:19:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.138 23:19:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.138 23:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:11.138 { 00:16:11.138 "cntlid": 101, 00:16:11.138 "qid": 0, 00:16:11.138 "state": "enabled", 00:16:11.138 "thread": "nvmf_tgt_poll_group_000", 00:16:11.138 "listen_address": { 00:16:11.138 "trtype": "TCP", 00:16:11.138 "adrfam": "IPv4", 00:16:11.138 "traddr": "10.0.0.2", 00:16:11.138 "trsvcid": "4420" 00:16:11.138 }, 00:16:11.138 "peer_address": { 00:16:11.138 "trtype": "TCP", 00:16:11.138 "adrfam": "IPv4", 00:16:11.138 "traddr": "10.0.0.1", 00:16:11.138 "trsvcid": "54952" 00:16:11.138 }, 00:16:11.138 "auth": 
{ 00:16:11.138 "state": "completed", 00:16:11.138 "digest": "sha512", 00:16:11.138 "dhgroup": "null" 00:16:11.138 } 00:16:11.138 } 00:16:11.138 ]' 00:16:11.138 23:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:11.138 23:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:11.138 23:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:11.396 23:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:11.396 23:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:11.396 23:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.396 23:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.396 23:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.652 23:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:Yjk5M2MyMmFmN2ZkNmEwZmZiMzlhNWQ2OWEzZDU5ODBhNjhlNGUwMDU0NmJmYmQ3f1jA0Q==: --dhchap-ctrl-secret DHHC-1:01:ZGMwOTQwYWVlNGZhMDM4YTA4MDZmYzkyNjU4MDE2MmLcU4Wx: 00:16:12.584 23:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.584 23:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:12.584 23:19:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.584 23:19:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.584 23:19:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.584 23:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:12.584 23:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:12.584 23:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:12.841 23:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:16:12.841 23:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:12.841 23:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:12.841 23:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:12.841 23:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:12.841 23:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.841 23:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:12.841 23:19:27 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.842 23:19:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.842 23:19:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.842 23:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:12.842 23:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:13.099 00:16:13.099 23:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:13.099 23:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:13.099 23:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.375 23:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.375 23:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.375 23:19:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.375 23:19:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.375 23:19:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.375 23:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:13.375 { 00:16:13.375 "cntlid": 103, 00:16:13.375 "qid": 0, 00:16:13.375 "state": "enabled", 00:16:13.375 "thread": "nvmf_tgt_poll_group_000", 00:16:13.375 "listen_address": { 00:16:13.375 "trtype": "TCP", 00:16:13.375 "adrfam": "IPv4", 00:16:13.375 "traddr": "10.0.0.2", 00:16:13.375 "trsvcid": "4420" 00:16:13.375 }, 00:16:13.375 "peer_address": { 00:16:13.375 "trtype": "TCP", 00:16:13.375 "adrfam": "IPv4", 00:16:13.375 "traddr": "10.0.0.1", 00:16:13.375 "trsvcid": "54986" 00:16:13.375 }, 00:16:13.375 "auth": { 00:16:13.375 "state": "completed", 00:16:13.375 "digest": "sha512", 00:16:13.375 "dhgroup": "null" 00:16:13.375 } 00:16:13.375 } 00:16:13.375 ]' 00:16:13.375 23:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:13.375 23:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:13.375 23:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:13.375 23:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:13.375 23:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:13.375 23:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.375 23:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.375 23:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.712 23:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTc4N2Q5YjQ1Y2ZhN2YyZmUzMTBlYTk1YmE3MGI5ZmQ3ODQ3YTM5YWVmNzIwYTQwZmZhYTc1Y2VkMjVjNjQ1MSpPd20=: 00:16:14.645 23:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.645 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.645 23:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:14.645 23:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.645 23:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.645 23:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.645 23:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:14.645 23:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:14.645 23:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:14.645 23:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:14.903 23:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:16:14.903 23:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:14.903 23:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:14.903 23:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:14.903 23:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:14.903 23:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.903 23:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.903 23:19:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.903 23:19:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.903 23:19:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.903 23:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.903 23:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.466 00:16:15.466 23:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:15.466 23:19:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:15.466 23:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.466 23:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.466 23:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.466 23:19:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.466 23:19:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.723 23:19:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.723 23:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:15.723 { 00:16:15.723 "cntlid": 105, 00:16:15.723 "qid": 0, 00:16:15.723 "state": "enabled", 00:16:15.723 "thread": "nvmf_tgt_poll_group_000", 00:16:15.723 "listen_address": { 00:16:15.723 "trtype": "TCP", 00:16:15.723 "adrfam": "IPv4", 00:16:15.723 "traddr": "10.0.0.2", 00:16:15.723 "trsvcid": "4420" 00:16:15.723 }, 00:16:15.723 "peer_address": { 00:16:15.723 "trtype": "TCP", 00:16:15.723 "adrfam": "IPv4", 00:16:15.723 "traddr": "10.0.0.1", 00:16:15.723 "trsvcid": "47850" 00:16:15.723 }, 00:16:15.723 "auth": { 00:16:15.723 "state": "completed", 00:16:15.723 "digest": "sha512", 00:16:15.723 "dhgroup": "ffdhe2048" 00:16:15.723 } 00:16:15.723 } 00:16:15.723 ]' 00:16:15.723 23:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:15.723 23:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:15.723 23:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:15.723 23:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:15.723 23:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:15.723 23:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.723 23:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.723 23:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.981 23:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZTBjNTgzZTIyM2Y0NDIwMjFiMWU3MDRmMzA5OTVhZGY1MGI2YWY0Y2Y3NmY4YjhlDgIwtw==: --dhchap-ctrl-secret DHHC-1:03:Y2I3Y2FiNzA1YTI0NjI3YTEwNGY5Y2M4ODA4YTc2MWNmNTdjYTY2MWZjNDdmZDExMWExZTllMGM1NWI5YmY4McqrS28=: 00:16:16.910 23:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.910 23:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:16.910 23:19:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.910 23:19:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
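The exchange above is one full pass of the DH-HMAC-CHAP (dhchap) cycle that target/auth.sh repeats for every digest/dhgroup/key combination: configure the SPDK initiator's dhchap options, register the host NQN on the subsystem with the key pair under test, attach a controller and verify the negotiated digest, dhgroup and auth state from nvmf_subsystem_get_qpairs, then redo the connection with kernel nvme-cli before removing the host again. A condensed sketch of one pass, reconstructed only from commands already visible in this log (the <HOSTNQN>/<SUBNQN>/<uuid> placeholders and the elided DHHC-1 secrets are illustrative, not literal values):

    # host-side RPC (rpc.py -s /var/tmp/host.sock): limit digests/DH groups the SPDK initiator offers
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
    # target-side RPC: allow <HOSTNQN> on <SUBNQN> with the key pair under test
    rpc.py nvmf_subsystem_add_host <SUBNQN> <HOSTNQN> --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # authenticate over the SPDK initiator path ...
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q <HOSTNQN> -n <SUBNQN> --dhchap-key key0 --dhchap-ctrlr-key ckey0
    rpc.py nvmf_subsystem_get_qpairs <SUBNQN>        # auth.state/digest/dhgroup are checked with jq
    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    # ... and again over the kernel initiator with nvme-cli, using the matching DHHC-1 secrets
    nvme connect -t tcp -a 10.0.0.2 -n <SUBNQN> -i 1 -q <HOSTNQN> --hostid <uuid> \
        --dhchap-secret DHHC-1:00:<key0> --dhchap-ctrl-secret DHHC-1:03:<ckey0>
    nvme disconnect -n <SUBNQN>
    rpc.py nvmf_subsystem_remove_host <SUBNQN> <HOSTNQN>

The same sequence continues below for key1 through key3, then repeats for the ffdhe3072 and ffdhe4096 DH groups.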
00:16:16.910 23:19:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.910 23:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:16.910 23:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:16.910 23:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:17.540 23:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:16:17.540 23:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:17.541 23:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:17.541 23:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:17.541 23:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:17.541 23:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.541 23:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.541 23:19:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.541 23:19:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.541 23:19:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.541 23:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.541 23:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.541 00:16:17.541 23:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:17.541 23:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:17.541 23:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.797 23:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.797 23:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.797 23:19:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.797 23:19:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.797 23:19:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.797 23:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:17.797 { 00:16:17.797 "cntlid": 107, 00:16:17.797 "qid": 0, 00:16:17.797 "state": "enabled", 00:16:17.797 "thread": 
"nvmf_tgt_poll_group_000", 00:16:17.797 "listen_address": { 00:16:17.797 "trtype": "TCP", 00:16:17.797 "adrfam": "IPv4", 00:16:17.797 "traddr": "10.0.0.2", 00:16:17.797 "trsvcid": "4420" 00:16:17.797 }, 00:16:17.797 "peer_address": { 00:16:17.797 "trtype": "TCP", 00:16:17.797 "adrfam": "IPv4", 00:16:17.797 "traddr": "10.0.0.1", 00:16:17.797 "trsvcid": "47866" 00:16:17.797 }, 00:16:17.797 "auth": { 00:16:17.797 "state": "completed", 00:16:17.797 "digest": "sha512", 00:16:17.797 "dhgroup": "ffdhe2048" 00:16:17.797 } 00:16:17.797 } 00:16:17.797 ]' 00:16:17.797 23:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:17.797 23:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:17.797 23:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:18.053 23:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:18.053 23:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:18.053 23:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.053 23:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.053 23:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.310 23:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:NzRlNDllZDZiMzY5NTI3ZTgyMDczZjhhZmEzYjU5NTGFKtjo: --dhchap-ctrl-secret DHHC-1:02:Y2ViZTZjMjNkNWY4ZTc5ZWI0OTRkZDNhNzJmNjQ4MzgyNWMyZjNmOGE1ZWNmZTJmNRhF4A==: 00:16:19.240 23:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.240 23:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:19.240 23:19:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.240 23:19:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.240 23:19:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.240 23:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:19.240 23:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:19.240 23:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:19.497 23:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:16:19.497 23:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:19.497 23:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:19.497 23:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:19.497 23:19:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:19.497 23:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.497 23:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.497 23:19:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.497 23:19:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.497 23:19:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.497 23:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.497 23:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.754 00:16:19.754 23:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:19.754 23:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:19.754 23:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.011 23:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.011 23:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.011 23:19:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.011 23:19:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.011 23:19:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.011 23:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:20.011 { 00:16:20.011 "cntlid": 109, 00:16:20.011 "qid": 0, 00:16:20.011 "state": "enabled", 00:16:20.011 "thread": "nvmf_tgt_poll_group_000", 00:16:20.011 "listen_address": { 00:16:20.011 "trtype": "TCP", 00:16:20.011 "adrfam": "IPv4", 00:16:20.011 "traddr": "10.0.0.2", 00:16:20.011 "trsvcid": "4420" 00:16:20.011 }, 00:16:20.011 "peer_address": { 00:16:20.011 "trtype": "TCP", 00:16:20.011 "adrfam": "IPv4", 00:16:20.011 "traddr": "10.0.0.1", 00:16:20.011 "trsvcid": "47892" 00:16:20.011 }, 00:16:20.011 "auth": { 00:16:20.011 "state": "completed", 00:16:20.011 "digest": "sha512", 00:16:20.011 "dhgroup": "ffdhe2048" 00:16:20.011 } 00:16:20.011 } 00:16:20.011 ]' 00:16:20.011 23:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:20.011 23:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:20.011 23:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:20.267 23:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:20.267 23:19:35 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:20.267 23:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.267 23:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.267 23:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.524 23:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:Yjk5M2MyMmFmN2ZkNmEwZmZiMzlhNWQ2OWEzZDU5ODBhNjhlNGUwMDU0NmJmYmQ3f1jA0Q==: --dhchap-ctrl-secret DHHC-1:01:ZGMwOTQwYWVlNGZhMDM4YTA4MDZmYzkyNjU4MDE2MmLcU4Wx: 00:16:21.456 23:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.456 23:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:21.456 23:19:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.456 23:19:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.456 23:19:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.456 23:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:21.456 23:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:21.456 23:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:21.713 23:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:16:21.713 23:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:21.713 23:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:21.713 23:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:21.713 23:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:21.713 23:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.713 23:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:21.713 23:19:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.713 23:19:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.713 23:19:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.713 23:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:21.713 23:19:36 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:21.971 00:16:21.971 23:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:21.971 23:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:21.971 23:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.228 23:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.228 23:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.228 23:19:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.228 23:19:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.228 23:19:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.228 23:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:22.228 { 00:16:22.228 "cntlid": 111, 00:16:22.228 "qid": 0, 00:16:22.228 "state": "enabled", 00:16:22.229 "thread": "nvmf_tgt_poll_group_000", 00:16:22.229 "listen_address": { 00:16:22.229 "trtype": "TCP", 00:16:22.229 "adrfam": "IPv4", 00:16:22.229 "traddr": "10.0.0.2", 00:16:22.229 "trsvcid": "4420" 00:16:22.229 }, 00:16:22.229 "peer_address": { 00:16:22.229 "trtype": "TCP", 00:16:22.229 "adrfam": "IPv4", 00:16:22.229 "traddr": "10.0.0.1", 00:16:22.229 "trsvcid": "47932" 00:16:22.229 }, 00:16:22.229 "auth": { 00:16:22.229 "state": "completed", 00:16:22.229 "digest": "sha512", 00:16:22.229 "dhgroup": "ffdhe2048" 00:16:22.229 } 00:16:22.229 } 00:16:22.229 ]' 00:16:22.229 23:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:22.229 23:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:22.229 23:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:22.486 23:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:22.486 23:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:22.486 23:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.486 23:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.486 23:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.743 23:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTc4N2Q5YjQ1Y2ZhN2YyZmUzMTBlYTk1YmE3MGI5ZmQ3ODQ3YTM5YWVmNzIwYTQwZmZhYTc1Y2VkMjVjNjQ1MSpPd20=: 00:16:23.676 23:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.676 23:19:38 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:23.676 23:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.676 23:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.676 23:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.676 23:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:23.676 23:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:23.676 23:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:23.676 23:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:23.933 23:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:16:23.933 23:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:23.933 23:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:23.933 23:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:23.933 23:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:23.933 23:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.933 23:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.933 23:19:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.933 23:19:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.933 23:19:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.933 23:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.933 23:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.189 00:16:24.189 23:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:24.189 23:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:24.189 23:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.446 23:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.446 23:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.446 23:19:39 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.446 23:19:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.446 23:19:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.446 23:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:24.446 { 00:16:24.446 "cntlid": 113, 00:16:24.446 "qid": 0, 00:16:24.446 "state": "enabled", 00:16:24.446 "thread": "nvmf_tgt_poll_group_000", 00:16:24.446 "listen_address": { 00:16:24.446 "trtype": "TCP", 00:16:24.446 "adrfam": "IPv4", 00:16:24.446 "traddr": "10.0.0.2", 00:16:24.446 "trsvcid": "4420" 00:16:24.446 }, 00:16:24.446 "peer_address": { 00:16:24.446 "trtype": "TCP", 00:16:24.446 "adrfam": "IPv4", 00:16:24.446 "traddr": "10.0.0.1", 00:16:24.446 "trsvcid": "47948" 00:16:24.446 }, 00:16:24.446 "auth": { 00:16:24.446 "state": "completed", 00:16:24.446 "digest": "sha512", 00:16:24.446 "dhgroup": "ffdhe3072" 00:16:24.446 } 00:16:24.446 } 00:16:24.446 ]' 00:16:24.446 23:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:24.702 23:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:24.702 23:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:24.702 23:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:24.702 23:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:24.702 23:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.702 23:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.702 23:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.958 23:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZTBjNTgzZTIyM2Y0NDIwMjFiMWU3MDRmMzA5OTVhZGY1MGI2YWY0Y2Y3NmY4YjhlDgIwtw==: --dhchap-ctrl-secret DHHC-1:03:Y2I3Y2FiNzA1YTI0NjI3YTEwNGY5Y2M4ODA4YTc2MWNmNTdjYTY2MWZjNDdmZDExMWExZTllMGM1NWI5YmY4McqrS28=: 00:16:25.886 23:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.886 23:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:25.886 23:19:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.886 23:19:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.886 23:19:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.886 23:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:25.886 23:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:25.886 23:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:26.142 23:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:16:26.142 23:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:26.142 23:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:26.142 23:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:26.142 23:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:26.142 23:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.142 23:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.142 23:19:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.142 23:19:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.142 23:19:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.142 23:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.142 23:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.703 00:16:26.703 23:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:26.703 23:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:26.703 23:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.703 23:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.703 23:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.703 23:19:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.703 23:19:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.960 23:19:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.960 23:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:26.960 { 00:16:26.960 "cntlid": 115, 00:16:26.960 "qid": 0, 00:16:26.960 "state": "enabled", 00:16:26.960 "thread": "nvmf_tgt_poll_group_000", 00:16:26.960 "listen_address": { 00:16:26.960 "trtype": "TCP", 00:16:26.960 "adrfam": "IPv4", 00:16:26.960 "traddr": "10.0.0.2", 00:16:26.960 "trsvcid": "4420" 00:16:26.960 }, 00:16:26.960 "peer_address": { 00:16:26.960 "trtype": "TCP", 00:16:26.960 "adrfam": "IPv4", 00:16:26.960 "traddr": "10.0.0.1", 00:16:26.960 "trsvcid": "55162" 00:16:26.960 }, 00:16:26.960 "auth": { 00:16:26.960 "state": "completed", 00:16:26.960 "digest": "sha512", 00:16:26.960 "dhgroup": "ffdhe3072" 00:16:26.960 } 00:16:26.960 } 
00:16:26.960 ]' 00:16:26.960 23:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:26.960 23:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:26.960 23:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:26.960 23:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:26.960 23:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:26.960 23:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.960 23:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.960 23:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.217 23:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:NzRlNDllZDZiMzY5NTI3ZTgyMDczZjhhZmEzYjU5NTGFKtjo: --dhchap-ctrl-secret DHHC-1:02:Y2ViZTZjMjNkNWY4ZTc5ZWI0OTRkZDNhNzJmNjQ4MzgyNWMyZjNmOGE1ZWNmZTJmNRhF4A==: 00:16:28.151 23:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.151 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.151 23:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:28.151 23:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.151 23:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.151 23:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.151 23:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:28.151 23:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:28.151 23:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:28.408 23:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:16:28.408 23:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:28.408 23:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:28.408 23:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:28.408 23:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:28.408 23:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.408 23:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.408 23:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.408 23:19:43 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.408 23:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.408 23:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.408 23:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.972 00:16:28.972 23:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:28.972 23:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:28.972 23:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.972 23:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.972 23:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.972 23:19:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.972 23:19:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.972 23:19:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.972 23:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:28.972 { 00:16:28.972 "cntlid": 117, 00:16:28.972 "qid": 0, 00:16:28.972 "state": "enabled", 00:16:28.972 "thread": "nvmf_tgt_poll_group_000", 00:16:28.972 "listen_address": { 00:16:28.972 "trtype": "TCP", 00:16:28.972 "adrfam": "IPv4", 00:16:28.972 "traddr": "10.0.0.2", 00:16:28.972 "trsvcid": "4420" 00:16:28.972 }, 00:16:28.972 "peer_address": { 00:16:28.972 "trtype": "TCP", 00:16:28.972 "adrfam": "IPv4", 00:16:28.972 "traddr": "10.0.0.1", 00:16:28.972 "trsvcid": "55192" 00:16:28.972 }, 00:16:28.972 "auth": { 00:16:28.972 "state": "completed", 00:16:28.972 "digest": "sha512", 00:16:28.972 "dhgroup": "ffdhe3072" 00:16:28.972 } 00:16:28.972 } 00:16:28.972 ]' 00:16:28.972 23:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:29.228 23:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:29.228 23:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:29.228 23:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:29.228 23:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:29.228 23:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.228 23:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.228 23:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.486 23:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:Yjk5M2MyMmFmN2ZkNmEwZmZiMzlhNWQ2OWEzZDU5ODBhNjhlNGUwMDU0NmJmYmQ3f1jA0Q==: --dhchap-ctrl-secret DHHC-1:01:ZGMwOTQwYWVlNGZhMDM4YTA4MDZmYzkyNjU4MDE2MmLcU4Wx: 00:16:30.419 23:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.419 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.419 23:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:30.419 23:19:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.419 23:19:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.419 23:19:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.419 23:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:30.419 23:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:30.419 23:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:30.676 23:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:16:30.676 23:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:30.676 23:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:30.676 23:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:30.676 23:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:30.676 23:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.676 23:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:30.676 23:19:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.676 23:19:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.676 23:19:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.676 23:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:30.676 23:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:31.240 00:16:31.240 23:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:31.240 23:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:31.240 23:19:46 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.240 23:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.240 23:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.240 23:19:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.240 23:19:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.240 23:19:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.240 23:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:31.240 { 00:16:31.240 "cntlid": 119, 00:16:31.240 "qid": 0, 00:16:31.240 "state": "enabled", 00:16:31.240 "thread": "nvmf_tgt_poll_group_000", 00:16:31.240 "listen_address": { 00:16:31.240 "trtype": "TCP", 00:16:31.240 "adrfam": "IPv4", 00:16:31.240 "traddr": "10.0.0.2", 00:16:31.240 "trsvcid": "4420" 00:16:31.240 }, 00:16:31.240 "peer_address": { 00:16:31.240 "trtype": "TCP", 00:16:31.240 "adrfam": "IPv4", 00:16:31.240 "traddr": "10.0.0.1", 00:16:31.241 "trsvcid": "55212" 00:16:31.241 }, 00:16:31.241 "auth": { 00:16:31.241 "state": "completed", 00:16:31.241 "digest": "sha512", 00:16:31.241 "dhgroup": "ffdhe3072" 00:16:31.241 } 00:16:31.241 } 00:16:31.241 ]' 00:16:31.241 23:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:31.498 23:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:31.498 23:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:31.498 23:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:31.498 23:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:31.498 23:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.498 23:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.498 23:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.755 23:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTc4N2Q5YjQ1Y2ZhN2YyZmUzMTBlYTk1YmE3MGI5ZmQ3ODQ3YTM5YWVmNzIwYTQwZmZhYTc1Y2VkMjVjNjQ1MSpPd20=: 00:16:32.685 23:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.685 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.685 23:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:32.685 23:19:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.685 23:19:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.685 23:19:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.685 23:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:32.685 23:19:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:32.685 23:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:32.685 23:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:32.942 23:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:16:32.942 23:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:32.942 23:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:32.942 23:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:32.942 23:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:32.942 23:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.942 23:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.942 23:19:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.942 23:19:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.942 23:19:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.942 23:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.942 23:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.199 00:16:33.199 23:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:33.199 23:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.199 23:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:33.457 23:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.457 23:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.457 23:19:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.457 23:19:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.457 23:19:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.457 23:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:33.457 { 00:16:33.457 "cntlid": 121, 00:16:33.457 "qid": 0, 00:16:33.457 "state": "enabled", 00:16:33.457 "thread": "nvmf_tgt_poll_group_000", 00:16:33.457 "listen_address": { 00:16:33.457 "trtype": "TCP", 00:16:33.457 "adrfam": "IPv4", 
00:16:33.457 "traddr": "10.0.0.2", 00:16:33.457 "trsvcid": "4420" 00:16:33.457 }, 00:16:33.457 "peer_address": { 00:16:33.457 "trtype": "TCP", 00:16:33.457 "adrfam": "IPv4", 00:16:33.457 "traddr": "10.0.0.1", 00:16:33.457 "trsvcid": "55238" 00:16:33.457 }, 00:16:33.457 "auth": { 00:16:33.457 "state": "completed", 00:16:33.457 "digest": "sha512", 00:16:33.457 "dhgroup": "ffdhe4096" 00:16:33.457 } 00:16:33.457 } 00:16:33.457 ]' 00:16:33.457 23:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:33.713 23:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:33.713 23:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:33.713 23:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:33.713 23:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:33.713 23:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.713 23:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.713 23:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.970 23:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZTBjNTgzZTIyM2Y0NDIwMjFiMWU3MDRmMzA5OTVhZGY1MGI2YWY0Y2Y3NmY4YjhlDgIwtw==: --dhchap-ctrl-secret DHHC-1:03:Y2I3Y2FiNzA1YTI0NjI3YTEwNGY5Y2M4ODA4YTc2MWNmNTdjYTY2MWZjNDdmZDExMWExZTllMGM1NWI5YmY4McqrS28=: 00:16:34.901 23:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.901 23:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:34.901 23:19:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.901 23:19:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.901 23:19:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.901 23:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:34.901 23:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:34.901 23:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:35.158 23:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:16:35.158 23:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:35.158 23:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:35.158 23:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:35.158 23:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:35.158 23:19:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.158 23:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.158 23:19:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.158 23:19:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.158 23:19:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.158 23:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.158 23:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.723 00:16:35.723 23:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:35.723 23:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:35.723 23:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.723 23:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.723 23:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.723 23:19:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.723 23:19:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.723 23:19:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.723 23:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:35.723 { 00:16:35.723 "cntlid": 123, 00:16:35.723 "qid": 0, 00:16:35.723 "state": "enabled", 00:16:35.723 "thread": "nvmf_tgt_poll_group_000", 00:16:35.723 "listen_address": { 00:16:35.723 "trtype": "TCP", 00:16:35.723 "adrfam": "IPv4", 00:16:35.723 "traddr": "10.0.0.2", 00:16:35.723 "trsvcid": "4420" 00:16:35.723 }, 00:16:35.723 "peer_address": { 00:16:35.723 "trtype": "TCP", 00:16:35.723 "adrfam": "IPv4", 00:16:35.723 "traddr": "10.0.0.1", 00:16:35.723 "trsvcid": "50650" 00:16:35.724 }, 00:16:35.724 "auth": { 00:16:35.724 "state": "completed", 00:16:35.724 "digest": "sha512", 00:16:35.724 "dhgroup": "ffdhe4096" 00:16:35.724 } 00:16:35.724 } 00:16:35.724 ]' 00:16:35.724 23:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:36.017 23:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:36.017 23:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:36.017 23:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:36.017 23:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:36.017 23:19:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.017 23:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.017 23:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.300 23:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:NzRlNDllZDZiMzY5NTI3ZTgyMDczZjhhZmEzYjU5NTGFKtjo: --dhchap-ctrl-secret DHHC-1:02:Y2ViZTZjMjNkNWY4ZTc5ZWI0OTRkZDNhNzJmNjQ4MzgyNWMyZjNmOGE1ZWNmZTJmNRhF4A==: 00:16:37.232 23:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.232 23:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:37.232 23:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.232 23:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.232 23:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.232 23:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:37.232 23:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:37.232 23:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:37.490 23:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:16:37.490 23:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:37.490 23:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:37.490 23:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:37.490 23:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:37.490 23:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.490 23:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.490 23:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.490 23:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.490 23:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.490 23:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.490 23:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.747 00:16:37.747 23:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:37.747 23:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:37.748 23:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.005 23:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.005 23:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.005 23:19:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.005 23:19:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.005 23:19:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.005 23:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:38.005 { 00:16:38.005 "cntlid": 125, 00:16:38.005 "qid": 0, 00:16:38.005 "state": "enabled", 00:16:38.005 "thread": "nvmf_tgt_poll_group_000", 00:16:38.005 "listen_address": { 00:16:38.005 "trtype": "TCP", 00:16:38.005 "adrfam": "IPv4", 00:16:38.005 "traddr": "10.0.0.2", 00:16:38.005 "trsvcid": "4420" 00:16:38.005 }, 00:16:38.005 "peer_address": { 00:16:38.005 "trtype": "TCP", 00:16:38.005 "adrfam": "IPv4", 00:16:38.005 "traddr": "10.0.0.1", 00:16:38.005 "trsvcid": "50676" 00:16:38.005 }, 00:16:38.005 "auth": { 00:16:38.005 "state": "completed", 00:16:38.005 "digest": "sha512", 00:16:38.005 "dhgroup": "ffdhe4096" 00:16:38.006 } 00:16:38.006 } 00:16:38.006 ]' 00:16:38.006 23:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:38.006 23:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:38.006 23:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:38.262 23:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:38.262 23:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:38.262 23:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.262 23:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.262 23:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.520 23:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:Yjk5M2MyMmFmN2ZkNmEwZmZiMzlhNWQ2OWEzZDU5ODBhNjhlNGUwMDU0NmJmYmQ3f1jA0Q==: --dhchap-ctrl-secret DHHC-1:01:ZGMwOTQwYWVlNGZhMDM4YTA4MDZmYzkyNjU4MDE2MmLcU4Wx: 00:16:39.451 23:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.451 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
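The trace above completes one full connect_authenticate pass for a single key: the host-side bdev_nvme options are pinned to one digest/dhgroup pair, the host NQN is added to the subsystem with the key (and controller key), a controller is attached over the host RPC socket, the qpair's negotiated digest, dhgroup and auth state are verified with jq, the controller is detached, and the same key is exercised once more through the kernel nvme connect/disconnect path before the host entry is removed. The following is a minimal shell sketch of that cycle, reconstructed only from the commands visible in this trace; the rpc_cmd/hostrpc helpers, socket path and NQNs are the test's own, while the loop variables (keyid, digest, dhgroup, hostnqn, hostid, key, ckey) are illustrative placeholders, not the verbatim target/auth.sh source.

    # Illustrative condensation of one connect_authenticate cycle from this trace.
    # Assumes: rpc_cmd drives the target, hostrpc drives the host app via /var/tmp/host.sock,
    # and keyid/digest/dhgroup/hostnqn/hostid/key/ckey come from the surrounding loops.
    hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
            --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
            -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
            --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    # Verify the controller came up and the qpair authenticated with the expected parameters.
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    hostrpc bdev_nvme_detach_controller nvme0
    # Repeat the handshake through the kernel initiator with the DHHC-1 secrets.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" \
            --hostid "$hostid" --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"

The remainder of this section repeats the same cycle for key2 and key3 and then for the larger ffdhe6144 and ffdhe8192 groups, which is why the surrounding log entries differ only in the keyid, dhgroup and ephemeral port numbers.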
00:16:39.451 23:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:39.451 23:19:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.451 23:19:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.451 23:19:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.451 23:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:39.451 23:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:39.451 23:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:39.709 23:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:16:39.709 23:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:39.709 23:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:39.709 23:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:39.709 23:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:39.709 23:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.709 23:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:39.709 23:19:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.709 23:19:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.709 23:19:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.709 23:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:39.709 23:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:40.275 00:16:40.275 23:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:40.275 23:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:40.275 23:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.532 23:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.532 23:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.532 23:19:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.532 23:19:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:16:40.532 23:19:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.532 23:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:40.532 { 00:16:40.532 "cntlid": 127, 00:16:40.532 "qid": 0, 00:16:40.532 "state": "enabled", 00:16:40.532 "thread": "nvmf_tgt_poll_group_000", 00:16:40.532 "listen_address": { 00:16:40.532 "trtype": "TCP", 00:16:40.532 "adrfam": "IPv4", 00:16:40.532 "traddr": "10.0.0.2", 00:16:40.532 "trsvcid": "4420" 00:16:40.532 }, 00:16:40.532 "peer_address": { 00:16:40.532 "trtype": "TCP", 00:16:40.532 "adrfam": "IPv4", 00:16:40.532 "traddr": "10.0.0.1", 00:16:40.532 "trsvcid": "50712" 00:16:40.532 }, 00:16:40.532 "auth": { 00:16:40.532 "state": "completed", 00:16:40.532 "digest": "sha512", 00:16:40.532 "dhgroup": "ffdhe4096" 00:16:40.532 } 00:16:40.532 } 00:16:40.532 ]' 00:16:40.533 23:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:40.533 23:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:40.533 23:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:40.533 23:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:40.533 23:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:40.533 23:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.533 23:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.533 23:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.790 23:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTc4N2Q5YjQ1Y2ZhN2YyZmUzMTBlYTk1YmE3MGI5ZmQ3ODQ3YTM5YWVmNzIwYTQwZmZhYTc1Y2VkMjVjNjQ1MSpPd20=: 00:16:41.723 23:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.723 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.723 23:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:41.723 23:19:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.723 23:19:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.723 23:19:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.723 23:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:41.723 23:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:41.723 23:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:41.723 23:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:41.980 23:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:16:41.980 23:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:41.980 23:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:41.980 23:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:41.980 23:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:41.980 23:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.980 23:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.980 23:19:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.980 23:19:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.980 23:19:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.980 23:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.980 23:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.545 00:16:42.545 23:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:42.545 23:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:42.545 23:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.803 23:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.803 23:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.803 23:19:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.803 23:19:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.803 23:19:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.803 23:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:42.803 { 00:16:42.803 "cntlid": 129, 00:16:42.803 "qid": 0, 00:16:42.803 "state": "enabled", 00:16:42.803 "thread": "nvmf_tgt_poll_group_000", 00:16:42.803 "listen_address": { 00:16:42.803 "trtype": "TCP", 00:16:42.803 "adrfam": "IPv4", 00:16:42.803 "traddr": "10.0.0.2", 00:16:42.803 "trsvcid": "4420" 00:16:42.803 }, 00:16:42.803 "peer_address": { 00:16:42.803 "trtype": "TCP", 00:16:42.803 "adrfam": "IPv4", 00:16:42.803 "traddr": "10.0.0.1", 00:16:42.803 "trsvcid": "50740" 00:16:42.803 }, 00:16:42.803 "auth": { 00:16:42.803 "state": "completed", 00:16:42.803 "digest": "sha512", 00:16:42.803 "dhgroup": "ffdhe6144" 00:16:42.803 } 00:16:42.803 } 00:16:42.803 ]' 00:16:42.803 23:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:43.061 23:19:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:43.061 23:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:43.061 23:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:43.061 23:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:43.061 23:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.061 23:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.061 23:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.317 23:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZTBjNTgzZTIyM2Y0NDIwMjFiMWU3MDRmMzA5OTVhZGY1MGI2YWY0Y2Y3NmY4YjhlDgIwtw==: --dhchap-ctrl-secret DHHC-1:03:Y2I3Y2FiNzA1YTI0NjI3YTEwNGY5Y2M4ODA4YTc2MWNmNTdjYTY2MWZjNDdmZDExMWExZTllMGM1NWI5YmY4McqrS28=: 00:16:44.249 23:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.249 23:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:44.249 23:19:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.249 23:19:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.249 23:19:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.249 23:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:44.249 23:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:44.249 23:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:44.506 23:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:16:44.506 23:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:44.506 23:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:44.506 23:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:44.506 23:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:44.506 23:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.506 23:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.506 23:19:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.506 23:19:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.764 23:19:59 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.764 23:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.764 23:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.330 00:16:45.330 23:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:45.330 23:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:45.330 23:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.588 23:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.588 23:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.588 23:20:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.588 23:20:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.588 23:20:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.588 23:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:45.588 { 00:16:45.588 "cntlid": 131, 00:16:45.588 "qid": 0, 00:16:45.588 "state": "enabled", 00:16:45.588 "thread": "nvmf_tgt_poll_group_000", 00:16:45.588 "listen_address": { 00:16:45.588 "trtype": "TCP", 00:16:45.588 "adrfam": "IPv4", 00:16:45.588 "traddr": "10.0.0.2", 00:16:45.588 "trsvcid": "4420" 00:16:45.588 }, 00:16:45.588 "peer_address": { 00:16:45.588 "trtype": "TCP", 00:16:45.588 "adrfam": "IPv4", 00:16:45.588 "traddr": "10.0.0.1", 00:16:45.588 "trsvcid": "50748" 00:16:45.588 }, 00:16:45.588 "auth": { 00:16:45.588 "state": "completed", 00:16:45.588 "digest": "sha512", 00:16:45.588 "dhgroup": "ffdhe6144" 00:16:45.588 } 00:16:45.588 } 00:16:45.588 ]' 00:16:45.588 23:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:45.588 23:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:45.588 23:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:45.588 23:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:45.588 23:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:45.588 23:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.588 23:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.588 23:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.845 23:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:NzRlNDllZDZiMzY5NTI3ZTgyMDczZjhhZmEzYjU5NTGFKtjo: --dhchap-ctrl-secret DHHC-1:02:Y2ViZTZjMjNkNWY4ZTc5ZWI0OTRkZDNhNzJmNjQ4MzgyNWMyZjNmOGE1ZWNmZTJmNRhF4A==: 00:16:46.775 23:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.775 23:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:46.775 23:20:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.775 23:20:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.775 23:20:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.775 23:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:46.775 23:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:46.775 23:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:47.033 23:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:16:47.033 23:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:47.033 23:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:47.033 23:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:47.033 23:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:47.033 23:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.033 23:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.033 23:20:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.033 23:20:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.033 23:20:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.033 23:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.033 23:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.597 00:16:47.597 23:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:47.597 23:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:47.597 23:20:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.855 23:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.855 23:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.855 23:20:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.855 23:20:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.855 23:20:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.855 23:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:47.855 { 00:16:47.855 "cntlid": 133, 00:16:47.855 "qid": 0, 00:16:47.855 "state": "enabled", 00:16:47.855 "thread": "nvmf_tgt_poll_group_000", 00:16:47.855 "listen_address": { 00:16:47.855 "trtype": "TCP", 00:16:47.855 "adrfam": "IPv4", 00:16:47.855 "traddr": "10.0.0.2", 00:16:47.855 "trsvcid": "4420" 00:16:47.855 }, 00:16:47.855 "peer_address": { 00:16:47.855 "trtype": "TCP", 00:16:47.855 "adrfam": "IPv4", 00:16:47.855 "traddr": "10.0.0.1", 00:16:47.855 "trsvcid": "34704" 00:16:47.855 }, 00:16:47.855 "auth": { 00:16:47.855 "state": "completed", 00:16:47.855 "digest": "sha512", 00:16:47.855 "dhgroup": "ffdhe6144" 00:16:47.855 } 00:16:47.855 } 00:16:47.855 ]' 00:16:47.855 23:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:47.855 23:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:47.855 23:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:48.113 23:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:48.113 23:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:48.113 23:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.113 23:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.113 23:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.371 23:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:Yjk5M2MyMmFmN2ZkNmEwZmZiMzlhNWQ2OWEzZDU5ODBhNjhlNGUwMDU0NmJmYmQ3f1jA0Q==: --dhchap-ctrl-secret DHHC-1:01:ZGMwOTQwYWVlNGZhMDM4YTA4MDZmYzkyNjU4MDE2MmLcU4Wx: 00:16:49.300 23:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.300 23:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:49.300 23:20:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.300 23:20:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.300 23:20:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.300 23:20:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:49.300 23:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:49.300 23:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:49.558 23:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:16:49.558 23:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:49.558 23:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:49.558 23:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:49.558 23:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:49.558 23:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.558 23:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:49.558 23:20:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.558 23:20:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.558 23:20:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.559 23:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:49.559 23:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:50.124 00:16:50.124 23:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:50.124 23:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:50.124 23:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.381 23:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.381 23:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.381 23:20:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.381 23:20:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.381 23:20:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.381 23:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:50.381 { 00:16:50.381 "cntlid": 135, 00:16:50.381 "qid": 0, 00:16:50.381 "state": "enabled", 00:16:50.381 "thread": "nvmf_tgt_poll_group_000", 00:16:50.381 "listen_address": { 00:16:50.381 "trtype": "TCP", 00:16:50.381 "adrfam": "IPv4", 00:16:50.381 "traddr": "10.0.0.2", 00:16:50.381 "trsvcid": "4420" 00:16:50.381 }, 
00:16:50.381 "peer_address": { 00:16:50.381 "trtype": "TCP", 00:16:50.381 "adrfam": "IPv4", 00:16:50.381 "traddr": "10.0.0.1", 00:16:50.381 "trsvcid": "34728" 00:16:50.381 }, 00:16:50.381 "auth": { 00:16:50.381 "state": "completed", 00:16:50.381 "digest": "sha512", 00:16:50.381 "dhgroup": "ffdhe6144" 00:16:50.381 } 00:16:50.381 } 00:16:50.381 ]' 00:16:50.381 23:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:50.381 23:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:50.381 23:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:50.381 23:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:50.381 23:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:50.381 23:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.381 23:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.381 23:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.638 23:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTc4N2Q5YjQ1Y2ZhN2YyZmUzMTBlYTk1YmE3MGI5ZmQ3ODQ3YTM5YWVmNzIwYTQwZmZhYTc1Y2VkMjVjNjQ1MSpPd20=: 00:16:51.568 23:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.568 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.568 23:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:51.568 23:20:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.568 23:20:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.568 23:20:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.568 23:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:51.568 23:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:51.568 23:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:51.568 23:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:51.824 23:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:16:51.824 23:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:51.824 23:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:51.824 23:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:51.824 23:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:51.824 23:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:16:51.824 23:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.824 23:20:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.824 23:20:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.824 23:20:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.824 23:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.824 23:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.754 00:16:52.754 23:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:52.754 23:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:52.754 23:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.010 23:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.010 23:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.011 23:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.011 23:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.011 23:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.011 23:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:53.011 { 00:16:53.011 "cntlid": 137, 00:16:53.011 "qid": 0, 00:16:53.011 "state": "enabled", 00:16:53.011 "thread": "nvmf_tgt_poll_group_000", 00:16:53.011 "listen_address": { 00:16:53.011 "trtype": "TCP", 00:16:53.011 "adrfam": "IPv4", 00:16:53.011 "traddr": "10.0.0.2", 00:16:53.011 "trsvcid": "4420" 00:16:53.011 }, 00:16:53.011 "peer_address": { 00:16:53.011 "trtype": "TCP", 00:16:53.011 "adrfam": "IPv4", 00:16:53.011 "traddr": "10.0.0.1", 00:16:53.011 "trsvcid": "34758" 00:16:53.011 }, 00:16:53.011 "auth": { 00:16:53.011 "state": "completed", 00:16:53.011 "digest": "sha512", 00:16:53.011 "dhgroup": "ffdhe8192" 00:16:53.011 } 00:16:53.011 } 00:16:53.011 ]' 00:16:53.011 23:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:53.011 23:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:53.011 23:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:53.011 23:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:53.011 23:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:53.011 23:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.011 23:20:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.011 23:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.266 23:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZTBjNTgzZTIyM2Y0NDIwMjFiMWU3MDRmMzA5OTVhZGY1MGI2YWY0Y2Y3NmY4YjhlDgIwtw==: --dhchap-ctrl-secret DHHC-1:03:Y2I3Y2FiNzA1YTI0NjI3YTEwNGY5Y2M4ODA4YTc2MWNmNTdjYTY2MWZjNDdmZDExMWExZTllMGM1NWI5YmY4McqrS28=: 00:16:54.196 23:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.453 23:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:54.453 23:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.453 23:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.453 23:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.453 23:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:54.453 23:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:54.453 23:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:54.711 23:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:16:54.711 23:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:54.711 23:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:54.711 23:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:54.711 23:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:54.711 23:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.711 23:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.711 23:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.711 23:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.711 23:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.711 23:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.711 23:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.644 00:16:55.644 23:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:55.644 23:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.644 23:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:55.644 23:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.644 23:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.644 23:20:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.644 23:20:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.901 23:20:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.901 23:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:55.901 { 00:16:55.901 "cntlid": 139, 00:16:55.901 "qid": 0, 00:16:55.901 "state": "enabled", 00:16:55.901 "thread": "nvmf_tgt_poll_group_000", 00:16:55.901 "listen_address": { 00:16:55.901 "trtype": "TCP", 00:16:55.901 "adrfam": "IPv4", 00:16:55.901 "traddr": "10.0.0.2", 00:16:55.901 "trsvcid": "4420" 00:16:55.901 }, 00:16:55.901 "peer_address": { 00:16:55.901 "trtype": "TCP", 00:16:55.901 "adrfam": "IPv4", 00:16:55.901 "traddr": "10.0.0.1", 00:16:55.901 "trsvcid": "34792" 00:16:55.901 }, 00:16:55.901 "auth": { 00:16:55.901 "state": "completed", 00:16:55.901 "digest": "sha512", 00:16:55.901 "dhgroup": "ffdhe8192" 00:16:55.901 } 00:16:55.901 } 00:16:55.901 ]' 00:16:55.901 23:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:55.901 23:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:55.901 23:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:55.901 23:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:55.901 23:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:55.901 23:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.901 23:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.901 23:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.159 23:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:NzRlNDllZDZiMzY5NTI3ZTgyMDczZjhhZmEzYjU5NTGFKtjo: --dhchap-ctrl-secret DHHC-1:02:Y2ViZTZjMjNkNWY4ZTc5ZWI0OTRkZDNhNzJmNjQ4MzgyNWMyZjNmOGE1ZWNmZTJmNRhF4A==: 00:16:57.092 23:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.092 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.092 23:20:12 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:57.092 23:20:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.092 23:20:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.092 23:20:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.092 23:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:57.092 23:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:57.092 23:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:57.350 23:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:16:57.350 23:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:57.350 23:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:57.350 23:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:57.350 23:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:57.350 23:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.350 23:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.350 23:20:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.350 23:20:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.350 23:20:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.350 23:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.350 23:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.281 00:16:58.281 23:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:58.281 23:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:58.281 23:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.539 23:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.539 23:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.539 23:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.539 23:20:13 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:58.539 23:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.539 23:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:58.539 { 00:16:58.539 "cntlid": 141, 00:16:58.539 "qid": 0, 00:16:58.539 "state": "enabled", 00:16:58.539 "thread": "nvmf_tgt_poll_group_000", 00:16:58.539 "listen_address": { 00:16:58.539 "trtype": "TCP", 00:16:58.539 "adrfam": "IPv4", 00:16:58.539 "traddr": "10.0.0.2", 00:16:58.539 "trsvcid": "4420" 00:16:58.539 }, 00:16:58.539 "peer_address": { 00:16:58.539 "trtype": "TCP", 00:16:58.539 "adrfam": "IPv4", 00:16:58.539 "traddr": "10.0.0.1", 00:16:58.539 "trsvcid": "54688" 00:16:58.539 }, 00:16:58.539 "auth": { 00:16:58.539 "state": "completed", 00:16:58.539 "digest": "sha512", 00:16:58.539 "dhgroup": "ffdhe8192" 00:16:58.539 } 00:16:58.539 } 00:16:58.539 ]' 00:16:58.539 23:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:58.539 23:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:58.539 23:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:58.539 23:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:58.539 23:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:58.539 23:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.539 23:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.539 23:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.796 23:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:Yjk5M2MyMmFmN2ZkNmEwZmZiMzlhNWQ2OWEzZDU5ODBhNjhlNGUwMDU0NmJmYmQ3f1jA0Q==: --dhchap-ctrl-secret DHHC-1:01:ZGMwOTQwYWVlNGZhMDM4YTA4MDZmYzkyNjU4MDE2MmLcU4Wx: 00:16:59.781 23:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.781 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.781 23:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:59.781 23:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.781 23:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.781 23:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.781 23:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:59.781 23:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:59.781 23:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:00.346 23:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:17:00.346 23:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:00.346 23:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:00.346 23:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:00.346 23:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:00.346 23:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.346 23:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:00.346 23:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.346 23:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.346 23:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.346 23:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:00.346 23:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:01.278 00:17:01.278 23:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:01.278 23:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.278 23:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:01.278 23:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.279 23:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.279 23:20:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.279 23:20:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.279 23:20:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.279 23:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:01.279 { 00:17:01.279 "cntlid": 143, 00:17:01.279 "qid": 0, 00:17:01.279 "state": "enabled", 00:17:01.279 "thread": "nvmf_tgt_poll_group_000", 00:17:01.279 "listen_address": { 00:17:01.279 "trtype": "TCP", 00:17:01.279 "adrfam": "IPv4", 00:17:01.279 "traddr": "10.0.0.2", 00:17:01.279 "trsvcid": "4420" 00:17:01.279 }, 00:17:01.279 "peer_address": { 00:17:01.279 "trtype": "TCP", 00:17:01.279 "adrfam": "IPv4", 00:17:01.279 "traddr": "10.0.0.1", 00:17:01.279 "trsvcid": "54718" 00:17:01.279 }, 00:17:01.279 "auth": { 00:17:01.279 "state": "completed", 00:17:01.279 "digest": "sha512", 00:17:01.279 "dhgroup": "ffdhe8192" 00:17:01.279 } 00:17:01.279 } 00:17:01.279 ]' 00:17:01.279 23:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:01.536 23:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:01.536 
23:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:01.536 23:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:01.536 23:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:01.536 23:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.536 23:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.536 23:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.794 23:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTc4N2Q5YjQ1Y2ZhN2YyZmUzMTBlYTk1YmE3MGI5ZmQ3ODQ3YTM5YWVmNzIwYTQwZmZhYTc1Y2VkMjVjNjQ1MSpPd20=: 00:17:02.725 23:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.725 23:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:02.725 23:20:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.725 23:20:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.725 23:20:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.725 23:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:17:02.725 23:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:17:02.725 23:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:17:02.725 23:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:02.725 23:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:02.726 23:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:02.983 23:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:17:02.983 23:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:02.983 23:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:02.983 23:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:02.983 23:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:02.983 23:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.983 23:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:17:02.983 23:20:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.983 23:20:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.983 23:20:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.983 23:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.983 23:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.913 00:17:03.913 23:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:03.913 23:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:03.913 23:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.170 23:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.170 23:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.170 23:20:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.170 23:20:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.170 23:20:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.170 23:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:04.170 { 00:17:04.170 "cntlid": 145, 00:17:04.170 "qid": 0, 00:17:04.170 "state": "enabled", 00:17:04.170 "thread": "nvmf_tgt_poll_group_000", 00:17:04.170 "listen_address": { 00:17:04.170 "trtype": "TCP", 00:17:04.170 "adrfam": "IPv4", 00:17:04.170 "traddr": "10.0.0.2", 00:17:04.170 "trsvcid": "4420" 00:17:04.170 }, 00:17:04.170 "peer_address": { 00:17:04.170 "trtype": "TCP", 00:17:04.170 "adrfam": "IPv4", 00:17:04.170 "traddr": "10.0.0.1", 00:17:04.170 "trsvcid": "54742" 00:17:04.170 }, 00:17:04.170 "auth": { 00:17:04.170 "state": "completed", 00:17:04.170 "digest": "sha512", 00:17:04.170 "dhgroup": "ffdhe8192" 00:17:04.170 } 00:17:04.170 } 00:17:04.170 ]' 00:17:04.170 23:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:04.170 23:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:04.170 23:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:04.170 23:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:04.170 23:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:04.170 23:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.170 23:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.170 23:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.427 23:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZTBjNTgzZTIyM2Y0NDIwMjFiMWU3MDRmMzA5OTVhZGY1MGI2YWY0Y2Y3NmY4YjhlDgIwtw==: --dhchap-ctrl-secret DHHC-1:03:Y2I3Y2FiNzA1YTI0NjI3YTEwNGY5Y2M4ODA4YTc2MWNmNTdjYTY2MWZjNDdmZDExMWExZTllMGM1NWI5YmY4McqrS28=: 00:17:05.360 23:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.360 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.360 23:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:05.360 23:20:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.360 23:20:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.360 23:20:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.360 23:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 00:17:05.360 23:20:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.360 23:20:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.360 23:20:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.360 23:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:05.360 23:20:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:05.360 23:20:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:05.360 23:20:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:05.360 23:20:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:05.360 23:20:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:05.360 23:20:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:05.360 23:20:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:05.618 23:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:17:06.183 request: 00:17:06.183 { 00:17:06.183 "name": "nvme0", 00:17:06.183 "trtype": "tcp", 00:17:06.183 "traddr": "10.0.0.2", 00:17:06.183 "adrfam": "ipv4", 00:17:06.183 "trsvcid": "4420", 00:17:06.183 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:06.183 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:06.183 "prchk_reftag": false, 00:17:06.183 "prchk_guard": false, 00:17:06.183 "hdgst": false, 00:17:06.183 "ddgst": false, 00:17:06.183 "dhchap_key": "key2", 00:17:06.183 "method": "bdev_nvme_attach_controller", 00:17:06.183 "req_id": 1 00:17:06.183 } 00:17:06.183 Got JSON-RPC error response 00:17:06.183 response: 00:17:06.183 { 00:17:06.183 "code": -5, 00:17:06.183 "message": "Input/output error" 00:17:06.183 } 00:17:06.183 23:20:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:06.183 23:20:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:06.183 23:20:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:06.183 23:20:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:06.183 23:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:06.183 23:20:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.183 23:20:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.439 23:20:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.439 23:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.439 23:20:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.439 23:20:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.439 23:20:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.439 23:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:06.440 23:20:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:06.440 23:20:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:06.440 23:20:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:06.440 23:20:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:06.440 23:20:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:06.440 23:20:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:06.440 23:20:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:06.440 23:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:07.371 request: 00:17:07.371 { 00:17:07.371 "name": "nvme0", 00:17:07.371 "trtype": "tcp", 00:17:07.371 "traddr": "10.0.0.2", 00:17:07.371 "adrfam": "ipv4", 00:17:07.371 "trsvcid": "4420", 00:17:07.371 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:07.371 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:07.371 "prchk_reftag": false, 00:17:07.371 "prchk_guard": false, 00:17:07.371 "hdgst": false, 00:17:07.371 "ddgst": false, 00:17:07.371 "dhchap_key": "key1", 00:17:07.371 "dhchap_ctrlr_key": "ckey2", 00:17:07.371 "method": "bdev_nvme_attach_controller", 00:17:07.371 "req_id": 1 00:17:07.371 } 00:17:07.371 Got JSON-RPC error response 00:17:07.371 response: 00:17:07.371 { 00:17:07.371 "code": -5, 00:17:07.371 "message": "Input/output error" 00:17:07.371 } 00:17:07.371 23:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:07.372 23:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:07.372 23:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:07.372 23:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:07.372 23:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:07.372 23:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.372 23:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.372 23:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.372 23:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 00:17:07.372 23:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.372 23:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.372 23:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.372 23:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.372 23:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:07.372 23:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.372 23:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:17:07.372 23:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:07.372 23:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:07.372 23:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:07.372 23:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.372 23:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.936 request: 00:17:07.936 { 00:17:07.936 "name": "nvme0", 00:17:07.936 "trtype": "tcp", 00:17:07.936 "traddr": "10.0.0.2", 00:17:07.936 "adrfam": "ipv4", 00:17:07.936 "trsvcid": "4420", 00:17:07.936 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:07.936 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:07.936 "prchk_reftag": false, 00:17:07.936 "prchk_guard": false, 00:17:07.936 "hdgst": false, 00:17:07.936 "ddgst": false, 00:17:07.936 "dhchap_key": "key1", 00:17:07.936 "dhchap_ctrlr_key": "ckey1", 00:17:07.936 "method": "bdev_nvme_attach_controller", 00:17:07.936 "req_id": 1 00:17:07.936 } 00:17:07.936 Got JSON-RPC error response 00:17:07.936 response: 00:17:07.936 { 00:17:07.936 "code": -5, 00:17:07.936 "message": "Input/output error" 00:17:07.936 } 00:17:07.936 23:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:07.936 23:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:07.936 23:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:07.936 23:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:07.936 23:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:07.936 23:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.936 23:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.194 23:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.194 23:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 2329000 00:17:08.194 23:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2329000 ']' 00:17:08.194 23:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2329000 00:17:08.194 23:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:08.194 23:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:08.194 23:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2329000 00:17:08.194 23:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:08.194 23:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:17:08.194 23:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2329000' 00:17:08.194 killing process with pid 2329000 00:17:08.194 23:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2329000 00:17:08.194 23:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2329000 00:17:08.451 23:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:08.451 23:20:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:08.451 23:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:08.451 23:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.451 23:20:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2351688 00:17:08.451 23:20:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:08.451 23:20:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2351688 00:17:08.451 23:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2351688 ']' 00:17:08.451 23:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.451 23:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:08.451 23:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.451 23:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:08.451 23:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.383 23:20:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:09.383 23:20:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:09.383 23:20:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:09.383 23:20:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:09.383 23:20:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.383 23:20:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:09.383 23:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:09.383 23:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 2351688 00:17:09.383 23:20:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2351688 ']' 00:17:09.383 23:20:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.383 23:20:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:09.383 23:20:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
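
The repeated "Input/output error" responses above are the point of this part of the test, not a failure of it: each bdev_nvme_attach_controller call with a mismatched or unregistered key (key2; key1 with ckey2; key1 with ckey1) is wrapped in the suite's NOT helper, so the case only passes when the JSON-RPC call fails. The real helper in autotest_common.sh tracks the exit status in es and does extra bookkeeping; the idiom it implements is roughly this:

    NOT() {
        # invert the wrapped command's status: the test fails if it unexpectedly succeeds
        if "$@"; then
            return 1
        fi
        return 0
    }
    # a key the target was never associated with must be rejected at attach time
    NOT scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
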
00:17:09.383 23:20:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:09.383 23:20:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.641 23:20:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:09.641 23:20:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:09.641 23:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:17:09.641 23:20:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.641 23:20:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.899 23:20:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.899 23:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:17:09.899 23:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:09.899 23:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:09.899 23:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:09.899 23:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:09.899 23:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.899 23:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:09.899 23:20:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.899 23:20:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.899 23:20:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.899 23:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:09.899 23:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:10.832 00:17:10.832 23:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:10.832 23:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:10.832 23:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.832 23:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.832 23:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.832 23:20:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.832 23:20:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.832 23:20:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.832 23:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:10.832 { 00:17:10.832 
"cntlid": 1, 00:17:10.832 "qid": 0, 00:17:10.832 "state": "enabled", 00:17:10.832 "thread": "nvmf_tgt_poll_group_000", 00:17:10.832 "listen_address": { 00:17:10.832 "trtype": "TCP", 00:17:10.832 "adrfam": "IPv4", 00:17:10.832 "traddr": "10.0.0.2", 00:17:10.832 "trsvcid": "4420" 00:17:10.832 }, 00:17:10.832 "peer_address": { 00:17:10.832 "trtype": "TCP", 00:17:10.832 "adrfam": "IPv4", 00:17:10.832 "traddr": "10.0.0.1", 00:17:10.832 "trsvcid": "55452" 00:17:10.832 }, 00:17:10.832 "auth": { 00:17:10.832 "state": "completed", 00:17:10.832 "digest": "sha512", 00:17:10.832 "dhgroup": "ffdhe8192" 00:17:10.833 } 00:17:10.833 } 00:17:10.833 ]' 00:17:10.833 23:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:11.091 23:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:11.091 23:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:11.091 23:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:11.091 23:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:11.091 23:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.091 23:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.091 23:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.349 23:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTc4N2Q5YjQ1Y2ZhN2YyZmUzMTBlYTk1YmE3MGI5ZmQ3ODQ3YTM5YWVmNzIwYTQwZmZhYTc1Y2VkMjVjNjQ1MSpPd20=: 00:17:12.282 23:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.282 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.282 23:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:12.282 23:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.282 23:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.282 23:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.282 23:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:12.282 23:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.282 23:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.283 23:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.283 23:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:12.283 23:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:12.541 23:20:27 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:12.541 23:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:12.541 23:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:12.541 23:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:12.541 23:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:12.541 23:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:12.541 23:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:12.541 23:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:12.541 23:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:12.799 request: 00:17:12.799 { 00:17:12.799 "name": "nvme0", 00:17:12.799 "trtype": "tcp", 00:17:12.799 "traddr": "10.0.0.2", 00:17:12.799 "adrfam": "ipv4", 00:17:12.799 "trsvcid": "4420", 00:17:12.799 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:12.799 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:12.799 "prchk_reftag": false, 00:17:12.799 "prchk_guard": false, 00:17:12.799 "hdgst": false, 00:17:12.799 "ddgst": false, 00:17:12.799 "dhchap_key": "key3", 00:17:12.799 "method": "bdev_nvme_attach_controller", 00:17:12.799 "req_id": 1 00:17:12.799 } 00:17:12.799 Got JSON-RPC error response 00:17:12.799 response: 00:17:12.799 { 00:17:12.799 "code": -5, 00:17:12.799 "message": "Input/output error" 00:17:12.799 } 00:17:12.799 23:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:12.799 23:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:12.799 23:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:12.799 23:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:12.799 23:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:17:12.799 23:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:17:12.799 23:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:12.799 23:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:13.056 23:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:13.056 23:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:13.056 23:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:13.056 23:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:13.056 23:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:13.056 23:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:13.056 23:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:13.057 23:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:13.057 23:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:13.314 request: 00:17:13.314 { 00:17:13.314 "name": "nvme0", 00:17:13.314 "trtype": "tcp", 00:17:13.314 "traddr": "10.0.0.2", 00:17:13.314 "adrfam": "ipv4", 00:17:13.314 "trsvcid": "4420", 00:17:13.314 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:13.314 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:13.314 "prchk_reftag": false, 00:17:13.314 "prchk_guard": false, 00:17:13.314 "hdgst": false, 00:17:13.314 "ddgst": false, 00:17:13.314 "dhchap_key": "key3", 00:17:13.314 "method": "bdev_nvme_attach_controller", 00:17:13.314 "req_id": 1 00:17:13.314 } 00:17:13.314 Got JSON-RPC error response 00:17:13.314 response: 00:17:13.314 { 00:17:13.314 "code": -5, 00:17:13.314 "message": "Input/output error" 00:17:13.314 } 00:17:13.315 23:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:13.315 23:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:13.315 23:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:13.315 23:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:13.315 23:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:13.315 23:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:17:13.315 23:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:13.315 23:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:13.315 23:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:13.315 23:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:13.573 23:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:13.573 23:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.573 23:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.573 23:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.573 23:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:13.573 23:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.573 23:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.573 23:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.573 23:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:13.573 23:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:13.573 23:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:13.573 23:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:13.573 23:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:13.573 23:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:13.573 23:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:13.573 23:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:13.573 23:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:13.831 request: 00:17:13.831 { 00:17:13.831 "name": "nvme0", 00:17:13.831 "trtype": "tcp", 00:17:13.831 "traddr": "10.0.0.2", 00:17:13.831 "adrfam": "ipv4", 00:17:13.831 "trsvcid": "4420", 00:17:13.831 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:13.831 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:13.831 "prchk_reftag": false, 00:17:13.831 "prchk_guard": false, 00:17:13.831 "hdgst": false, 00:17:13.831 "ddgst": false, 00:17:13.831 
"dhchap_key": "key0", 00:17:13.831 "dhchap_ctrlr_key": "key1", 00:17:13.831 "method": "bdev_nvme_attach_controller", 00:17:13.831 "req_id": 1 00:17:13.831 } 00:17:13.831 Got JSON-RPC error response 00:17:13.831 response: 00:17:13.831 { 00:17:13.831 "code": -5, 00:17:13.831 "message": "Input/output error" 00:17:13.831 } 00:17:13.831 23:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:13.831 23:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:13.831 23:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:13.831 23:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:13.831 23:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:13.831 23:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:14.089 00:17:14.089 23:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:17:14.089 23:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:17:14.089 23:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.347 23:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.347 23:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.347 23:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.604 23:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:17:14.604 23:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:17:14.604 23:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2329023 00:17:14.604 23:20:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2329023 ']' 00:17:14.604 23:20:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2329023 00:17:14.604 23:20:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:14.604 23:20:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:14.604 23:20:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2329023 00:17:14.604 23:20:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:14.604 23:20:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:14.604 23:20:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2329023' 00:17:14.604 killing process with pid 2329023 00:17:14.604 23:20:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2329023 00:17:14.604 23:20:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2329023 
00:17:15.168 23:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:15.168 23:20:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:15.168 23:20:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:17:15.168 23:20:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:15.168 23:20:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:17:15.168 23:20:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:15.168 23:20:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:15.168 rmmod nvme_tcp 00:17:15.168 rmmod nvme_fabrics 00:17:15.168 rmmod nvme_keyring 00:17:15.168 23:20:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:15.168 23:20:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:17:15.168 23:20:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:17:15.169 23:20:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 2351688 ']' 00:17:15.169 23:20:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2351688 00:17:15.169 23:20:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2351688 ']' 00:17:15.169 23:20:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2351688 00:17:15.169 23:20:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:15.169 23:20:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:15.169 23:20:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2351688 00:17:15.169 23:20:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:15.169 23:20:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:15.169 23:20:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2351688' 00:17:15.169 killing process with pid 2351688 00:17:15.169 23:20:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2351688 00:17:15.169 23:20:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2351688 00:17:15.426 23:20:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:15.426 23:20:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:15.426 23:20:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:15.426 23:20:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:15.426 23:20:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:15.426 23:20:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:15.426 23:20:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:15.426 23:20:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:17.952 23:20:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:17.952 23:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.EJD /tmp/spdk.key-sha256.Ngs /tmp/spdk.key-sha384.RIG /tmp/spdk.key-sha512.7U6 /tmp/spdk.key-sha512.il1 /tmp/spdk.key-sha384.n7Z /tmp/spdk.key-sha256.hgl '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:17.952 00:17:17.952 real 3m10.809s 00:17:17.952 user 7m24.480s 00:17:17.952 sys 0m25.110s 00:17:17.952 23:20:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:17.952 23:20:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.952 ************************************ 00:17:17.952 END TEST nvmf_auth_target 00:17:17.952 ************************************ 00:17:17.952 23:20:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:17.952 23:20:32 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:17:17.952 23:20:32 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:17.952 23:20:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:17:17.952 23:20:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:17.952 23:20:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:17.952 ************************************ 00:17:17.952 START TEST nvmf_bdevio_no_huge 00:17:17.952 ************************************ 00:17:17.952 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:17.952 * Looking for test storage... 00:17:17.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:17.952 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:17.952 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:17.952 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:17.952 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:17.952 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:17.952 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:17.952 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:17.952 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:17.952 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:17.952 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:17.952 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:17.952 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:17.952 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:17.952 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:17:17.952 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:17.952 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:17.952 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:17.952 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:17.952 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:17.952 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:17.952 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:17.952 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:17.952 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.952 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.952 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.952 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:17.952 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.953 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:17:17.953 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:17.953 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:17.953 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:17.953 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
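
The bdevio_no_huge prologue above is nvmf/common.sh rebuilding the target's command line for the next test: build_nvmf_app_args starts from the shared-memory id and the -e 0xFFFF mask (the same -i 0 -e 0xFFFF the auth-test target was launched with earlier), and the no-huge variant then appends whatever the NO_HUGE array holds, as the next trace line shows. Spelled out approximately, with the binary path taken from this run and the NO_HUGE contents left opaque because they are set elsewhere in common.sh and not visible in this excerpt:

    NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # same -i/-e pair as the earlier launch
    NVMF_APP+=("${NO_HUGE[@]}")                   # no-huge flags for this variant (contents not shown here)
    # later started inside the target network namespace, e.g.:
    # ip netns exec cvl_0_0_ns_spdk "${NVMF_APP[@]}" ...
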
00:17:17.953 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:17.953 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:17.953 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:17.953 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:17.953 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:17.953 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:17.953 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:17.953 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:17.953 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:17.953 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:17.953 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:17.953 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:17.953 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:17.953 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:17.953 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:17.953 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:17.953 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:17.953 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:17:17.953 23:20:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:17:19.856 Found 0000:84:00.0 (0x8086 - 0x159b) 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:17:19.856 Found 0000:84:00.1 (0x8086 - 0x159b) 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:17:19.856 Found net devices under 0000:84:00.0: cvl_0_0 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:17:19.856 Found net devices under 0000:84:00.1: cvl_0_1 00:17:19.856 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:19.857 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:19.857 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:17:19.857 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:19.857 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:19.857 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:19.857 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:19.857 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:19.857 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:19.857 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:19.857 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:19.857 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:19.857 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:19.857 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:19.857 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:17:19.857 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:19.857 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:19.857 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:19.857 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:19.857 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:19.857 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:19.857 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:19.857 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:19.857 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:19.857 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:19.857 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:19.857 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:19.857 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:17:19.857 00:17:19.857 --- 10.0.0.2 ping statistics --- 00:17:19.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.857 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:17:19.857 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:19.857 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:19.857 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:17:19.857 00:17:19.857 --- 10.0.0.1 ping statistics --- 00:17:19.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.857 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:17:19.857 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:19.857 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:17:19.857 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:19.857 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:19.857 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:19.857 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:19.857 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:19.857 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:19.857 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:19.857 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:19.857 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:19.857 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:19.857 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:19.857 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=2354480 00:17:19.857 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:19.857 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 2354480 00:17:19.857 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 2354480 ']' 00:17:19.857 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:19.857 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:19.857 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:19.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:19.857 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:19.857 23:20:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:19.857 [2024-07-15 23:20:34.990608] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:17:19.857 [2024-07-15 23:20:34.990689] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:19.857 [2024-07-15 23:20:35.065412] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:20.115 [2024-07-15 23:20:35.190734] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
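The nvmf_tcp_init entries above reduce to isolating one port of the NIC pair in a network namespace so the target and the initiator exchange NVMe/TCP traffic over a real link on a single host. A minimal sketch of that plumbing, using the interface names and addresses reported in this run:

# Sketch only: mirrors the nvmf_tcp_init steps logged above; interface names
# (cvl_0_0 / cvl_0_1) and the 10.0.0.0/24 addressing are taken from this run.
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                      # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address in the default namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1               # target -> initiator
# nvmf_tgt is then launched inside the namespace, as in the log:
# ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78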
00:17:20.115 [2024-07-15 23:20:35.190819] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:20.115 [2024-07-15 23:20:35.190835] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:20.115 [2024-07-15 23:20:35.190848] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:20.115 [2024-07-15 23:20:35.190860] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:20.115 [2024-07-15 23:20:35.190929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:20.115 [2024-07-15 23:20:35.190986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:20.115 [2024-07-15 23:20:35.191035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:20.115 [2024-07-15 23:20:35.191039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:20.681 23:20:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:20.681 23:20:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:17:20.681 23:20:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:20.681 23:20:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:20.681 23:20:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:20.681 23:20:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:20.681 23:20:35 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:20.681 23:20:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.681 23:20:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:20.681 [2024-07-15 23:20:35.986261] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:20.681 23:20:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.939 23:20:35 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:20.939 23:20:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.939 23:20:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:20.939 Malloc0 00:17:20.939 23:20:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.939 23:20:36 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:20.939 23:20:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.939 23:20:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:20.939 23:20:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.939 23:20:36 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:20.939 23:20:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.939 23:20:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:20.939 23:20:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.939 23:20:36 
nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:20.939 23:20:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.939 23:20:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:20.939 [2024-07-15 23:20:36.024120] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:20.939 23:20:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.939 23:20:36 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:20.939 23:20:36 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:20.939 23:20:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:17:20.939 23:20:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:17:20.939 23:20:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:20.939 23:20:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:20.939 { 00:17:20.939 "params": { 00:17:20.939 "name": "Nvme$subsystem", 00:17:20.939 "trtype": "$TEST_TRANSPORT", 00:17:20.939 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:20.939 "adrfam": "ipv4", 00:17:20.939 "trsvcid": "$NVMF_PORT", 00:17:20.939 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:20.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:20.939 "hdgst": ${hdgst:-false}, 00:17:20.939 "ddgst": ${ddgst:-false} 00:17:20.939 }, 00:17:20.939 "method": "bdev_nvme_attach_controller" 00:17:20.939 } 00:17:20.939 EOF 00:17:20.939 )") 00:17:20.939 23:20:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:17:20.939 23:20:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:17:20.939 23:20:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:17:20.939 23:20:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:20.939 "params": { 00:17:20.939 "name": "Nvme1", 00:17:20.939 "trtype": "tcp", 00:17:20.939 "traddr": "10.0.0.2", 00:17:20.939 "adrfam": "ipv4", 00:17:20.940 "trsvcid": "4420", 00:17:20.940 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:20.940 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:20.940 "hdgst": false, 00:17:20.940 "ddgst": false 00:17:20.940 }, 00:17:20.940 "method": "bdev_nvme_attach_controller" 00:17:20.940 }' 00:17:20.940 [2024-07-15 23:20:36.068707] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
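gen_nvmf_target_json turns those attach parameters into a JSON config that bdevio reads from a pipe (--json /dev/fd/62), so the initiator-side bdev exists before any test case runs. A hedged equivalent written to a regular file; the wrapper layout is the usual SPDK JSON-config shape, the values match this run, and the file name is illustrative:

# Sketch: equivalent of feeding gen_nvmf_target_json output to bdevio via --json.
cat > /tmp/bdevio_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json --no-huge -s 1024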
00:17:20.940 [2024-07-15 23:20:36.068823] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2354638 ] 00:17:20.940 [2024-07-15 23:20:36.135544] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:20.940 [2024-07-15 23:20:36.250943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.940 [2024-07-15 23:20:36.250967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:20.940 [2024-07-15 23:20:36.250971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.198 I/O targets: 00:17:21.198 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:21.198 00:17:21.198 00:17:21.198 CUnit - A unit testing framework for C - Version 2.1-3 00:17:21.198 http://cunit.sourceforge.net/ 00:17:21.198 00:17:21.198 00:17:21.198 Suite: bdevio tests on: Nvme1n1 00:17:21.198 Test: blockdev write read block ...passed 00:17:21.455 Test: blockdev write zeroes read block ...passed 00:17:21.455 Test: blockdev write zeroes read no split ...passed 00:17:21.455 Test: blockdev write zeroes read split ...passed 00:17:21.455 Test: blockdev write zeroes read split partial ...passed 00:17:21.455 Test: blockdev reset ...[2024-07-15 23:20:36.624338] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:21.455 [2024-07-15 23:20:36.624462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1334860 (9): Bad file descriptor 00:17:21.455 [2024-07-15 23:20:36.759005] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:21.455 passed 00:17:21.455 Test: blockdev write read 8 blocks ...passed 00:17:21.455 Test: blockdev write read size > 128k ...passed 00:17:21.455 Test: blockdev write read invalid size ...passed 00:17:21.713 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:21.713 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:21.713 Test: blockdev write read max offset ...passed 00:17:21.713 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:21.713 Test: blockdev writev readv 8 blocks ...passed 00:17:21.713 Test: blockdev writev readv 30 x 1block ...passed 00:17:21.713 Test: blockdev writev readv block ...passed 00:17:21.713 Test: blockdev writev readv size > 128k ...passed 00:17:21.713 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:21.713 Test: blockdev comparev and writev ...[2024-07-15 23:20:36.936323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:21.713 [2024-07-15 23:20:36.936358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.713 [2024-07-15 23:20:36.936383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:21.713 [2024-07-15 23:20:36.936400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:21.713 [2024-07-15 23:20:36.936937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:21.713 [2024-07-15 23:20:36.936962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:21.713 [2024-07-15 23:20:36.936985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:21.713 [2024-07-15 23:20:36.937001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:21.713 [2024-07-15 23:20:36.937524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:21.713 [2024-07-15 23:20:36.937547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:21.713 [2024-07-15 23:20:36.937570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:21.713 [2024-07-15 23:20:36.937586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:21.713 [2024-07-15 23:20:36.938119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:21.713 [2024-07-15 23:20:36.938143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:21.714 [2024-07-15 23:20:36.938173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:21.714 [2024-07-15 23:20:36.938190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:21.714 passed 00:17:21.714 Test: blockdev nvme passthru rw ...passed 00:17:21.714 Test: blockdev nvme passthru vendor specific ...[2024-07-15 23:20:37.022213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:21.714 [2024-07-15 23:20:37.022239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:21.714 [2024-07-15 23:20:37.022496] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:21.714 [2024-07-15 23:20:37.022520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:21.714 [2024-07-15 23:20:37.022870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:21.714 [2024-07-15 23:20:37.022894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:21.714 [2024-07-15 23:20:37.023241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:21.714 [2024-07-15 23:20:37.023264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:21.714 passed 00:17:21.972 Test: blockdev nvme admin passthru ...passed 00:17:21.972 Test: blockdev copy ...passed 00:17:21.972 00:17:21.972 Run Summary: Type Total Ran Passed Failed Inactive 00:17:21.972 suites 1 1 n/a 0 0 00:17:21.972 tests 23 23 23 0 0 00:17:21.972 asserts 152 152 152 0 n/a 00:17:21.972 00:17:21.972 Elapsed time = 1.254 seconds 00:17:22.230 23:20:37 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:22.230 23:20:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.230 23:20:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:22.230 23:20:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.230 23:20:37 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:22.230 23:20:37 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:22.230 23:20:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:22.230 23:20:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:17:22.230 23:20:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:22.230 23:20:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:17:22.230 23:20:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:22.230 23:20:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:22.230 rmmod nvme_tcp 00:17:22.230 rmmod nvme_fabrics 00:17:22.230 rmmod nvme_keyring 00:17:22.230 23:20:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:22.230 23:20:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:17:22.230 23:20:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:17:22.230 23:20:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 2354480 ']' 00:17:22.230 23:20:37 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 2354480 00:17:22.230 23:20:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 2354480 ']' 00:17:22.230 23:20:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 2354480 00:17:22.230 23:20:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:17:22.230 23:20:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:22.230 23:20:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2354480 00:17:22.488 23:20:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:17:22.488 23:20:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:17:22.488 23:20:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2354480' 00:17:22.488 killing process with pid 2354480 00:17:22.488 23:20:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 2354480 00:17:22.488 23:20:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 2354480 00:17:22.746 23:20:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:22.746 23:20:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:22.746 23:20:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:22.746 23:20:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:22.746 23:20:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:22.746 23:20:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:22.746 23:20:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:22.746 23:20:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.307 23:20:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:25.307 00:17:25.307 real 0m7.281s 00:17:25.307 user 0m13.886s 00:17:25.307 sys 0m2.538s 00:17:25.307 23:20:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:25.307 23:20:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:25.307 ************************************ 00:17:25.307 END TEST nvmf_bdevio_no_huge 00:17:25.307 ************************************ 00:17:25.307 23:20:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:25.307 23:20:40 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:25.307 23:20:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:25.307 23:20:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:25.307 23:20:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:25.307 ************************************ 00:17:25.307 START TEST nvmf_tls 00:17:25.307 ************************************ 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:25.307 * Looking for test storage... 
00:17:25.307 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:17:25.307 23:20:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:17:27.211 
23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:17:27.211 Found 0000:84:00.0 (0x8086 - 0x159b) 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:17:27.211 Found 0000:84:00.1 (0x8086 - 0x159b) 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:27.211 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:17:27.212 Found net devices under 0000:84:00.0: cvl_0_0 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:17:27.212 Found net devices under 0000:84:00.1: cvl_0_1 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:27.212 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:27.212 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:17:27.212 00:17:27.212 --- 10.0.0.2 ping statistics --- 00:17:27.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.212 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:27.212 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:27.212 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:17:27.212 00:17:27.212 --- 10.0.0.1 ping statistics --- 00:17:27.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.212 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2356839 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2356839 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2356839 ']' 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:27.212 23:20:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:27.212 [2024-07-15 23:20:42.286330] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:17:27.212 [2024-07-15 23:20:42.286414] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:27.212 EAL: No free 2048 kB hugepages reported on node 1 00:17:27.212 [2024-07-15 23:20:42.357718] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.212 [2024-07-15 23:20:42.472868] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:27.212 [2024-07-15 23:20:42.472921] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
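The target here is launched with --wait-for-rpc so the ssl socket implementation can be selected and pinned to TLS 1.3 before the framework finishes initializing; the entries that follow do exactly that and then build the TLS listener and the PSK-protected host entry. A condensed sketch of that sequence, with the rpc.py path shortened and the key file name taken from later in this run:

# Sketch of the TLS target bring-up performed by the following entries.
rpc=./scripts/rpc.py
$rpc sock_set_default_impl -i ssl
$rpc sock_impl_set_options -i ssl --tls-version 13      # TLS 1.3 for the ssl sock impl
$rpc framework_start_init                                # finish startup deferred by --wait-for-rpc
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS-secured listener
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QJntQf9IX5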
00:17:27.212 [2024-07-15 23:20:42.472950] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:27.212 [2024-07-15 23:20:42.472962] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:27.212 [2024-07-15 23:20:42.472971] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:27.212 [2024-07-15 23:20:42.472998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:28.145 23:20:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:28.145 23:20:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:28.145 23:20:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:28.145 23:20:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:28.145 23:20:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:28.145 23:20:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:28.145 23:20:43 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:17:28.145 23:20:43 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:28.404 true 00:17:28.404 23:20:43 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:28.404 23:20:43 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:17:28.662 23:20:43 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:17:28.662 23:20:43 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:17:28.662 23:20:43 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:28.920 23:20:44 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:28.920 23:20:44 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:17:29.177 23:20:44 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:17:29.178 23:20:44 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:17:29.178 23:20:44 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:29.436 23:20:44 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:29.436 23:20:44 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:17:29.694 23:20:44 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:17:29.694 23:20:44 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:17:29.694 23:20:44 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:29.694 23:20:44 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:17:29.951 23:20:45 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:17:29.951 23:20:45 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:17:29.951 23:20:45 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:30.209 23:20:45 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:30.209 23:20:45 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:17:30.466 23:20:45 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:17:30.466 23:20:45 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:17:30.466 23:20:45 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:30.734 23:20:45 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:30.735 23:20:45 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:17:30.995 23:20:46 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:17:30.995 23:20:46 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:17:30.995 23:20:46 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:30.995 23:20:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:30.995 23:20:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:30.995 23:20:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:30.995 23:20:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:17:30.995 23:20:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:30.995 23:20:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:30.995 23:20:46 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:30.995 23:20:46 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:30.995 23:20:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:30.995 23:20:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:30.995 23:20:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:30.995 23:20:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:17:30.995 23:20:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:30.995 23:20:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:30.995 23:20:46 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:30.995 23:20:46 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:17:30.995 23:20:46 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.QJntQf9IX5 00:17:30.995 23:20:46 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:30.995 23:20:46 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.5K6AewGikH 00:17:30.995 23:20:46 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:30.995 23:20:46 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:30.995 23:20:46 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.QJntQf9IX5 00:17:30.995 23:20:46 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.5K6AewGikH 00:17:30.995 23:20:46 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:17:31.251 23:20:46 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:31.838 23:20:46 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.QJntQf9IX5 00:17:31.838 23:20:46 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.QJntQf9IX5 00:17:31.838 23:20:46 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:31.838 [2024-07-15 23:20:47.119440] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:31.838 23:20:47 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:32.095 23:20:47 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:32.350 [2024-07-15 23:20:47.600758] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:32.350 [2024-07-15 23:20:47.601006] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:32.350 23:20:47 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:32.606 malloc0 00:17:32.606 23:20:47 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:32.862 23:20:48 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QJntQf9IX5 00:17:33.118 [2024-07-15 23:20:48.322978] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:33.118 23:20:48 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.QJntQf9IX5 00:17:33.118 EAL: No free 2048 kB hugepages reported on node 1 00:17:45.310 Initializing NVMe Controllers 00:17:45.310 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:45.310 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:45.310 Initialization complete. Launching workers. 
00:17:45.310 ======================================================== 00:17:45.310 Latency(us) 00:17:45.310 Device Information : IOPS MiB/s Average min max 00:17:45.310 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7560.88 29.53 8467.51 1267.05 10179.37 00:17:45.310 ======================================================== 00:17:45.310 Total : 7560.88 29.53 8467.51 1267.05 10179.37 00:17:45.310 00:17:45.310 23:20:58 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QJntQf9IX5 00:17:45.310 23:20:58 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:45.310 23:20:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:45.310 23:20:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:45.310 23:20:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.QJntQf9IX5' 00:17:45.310 23:20:58 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:45.310 23:20:58 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2358740 00:17:45.310 23:20:58 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:45.310 23:20:58 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:45.310 23:20:58 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2358740 /var/tmp/bdevperf.sock 00:17:45.310 23:20:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2358740 ']' 00:17:45.310 23:20:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:45.310 23:20:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:45.310 23:20:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:45.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:45.310 23:20:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:45.310 23:20:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:45.310 [2024-07-15 23:20:58.498692] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
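A note on the two NVMeTLSkey-1:01:...: strings produced by format_interchange_psk further up: they follow the NVMe/TCP PSK interchange layout of prefix, two-digit hash identifier, and base64 of the configured key with a CRC32 appended, which is what the embedded "python -" helper computes. A minimal stand-alone sketch of that computation, under the assumption that the CRC32 is appended little-endian (TP 8006 style); with that assumption it should reproduce the first key printed above:

python3 - <<'EOF'
import base64, zlib
# configured PSK bytes from the run above; appending the CRC32 little-endian is an assumption
key = b"00112233445566778899aabbccddeeff"
crc = zlib.crc32(key).to_bytes(4, byteorder="little")
print("NVMeTLSkey-1:{:02x}:{}:".format(1, base64.b64encode(key + crc).decode()))
# with those assumptions this should print:
# NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
EOF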
00:17:45.310 [2024-07-15 23:20:58.498774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2358740 ] 00:17:45.310 EAL: No free 2048 kB hugepages reported on node 1 00:17:45.310 [2024-07-15 23:20:58.557570] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.310 [2024-07-15 23:20:58.664649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:45.310 23:20:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:45.310 23:20:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:45.310 23:20:58 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QJntQf9IX5 00:17:45.310 [2024-07-15 23:20:59.019431] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:45.310 [2024-07-15 23:20:59.019564] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:45.310 TLSTESTn1 00:17:45.310 23:20:59 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:45.310 Running I/O for 10 seconds... 00:17:55.279 00:17:55.279 Latency(us) 00:17:55.279 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.279 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:55.279 Verification LBA range: start 0x0 length 0x2000 00:17:55.279 TLSTESTn1 : 10.03 3539.15 13.82 0.00 0.00 36097.23 7427.41 59807.67 00:17:55.279 =================================================================================================================== 00:17:55.279 Total : 3539.15 13.82 0.00 0.00 36097.23 7427.41 59807.67 00:17:55.279 0 00:17:55.279 23:21:09 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:55.279 23:21:09 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2358740 00:17:55.279 23:21:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2358740 ']' 00:17:55.279 23:21:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2358740 00:17:55.279 23:21:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:55.279 23:21:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:55.279 23:21:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2358740 00:17:55.279 23:21:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:55.279 23:21:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:55.279 23:21:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2358740' 00:17:55.279 killing process with pid 2358740 00:17:55.279 23:21:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2358740 00:17:55.279 Received shutdown signal, test time was about 10.000000 seconds 00:17:55.279 00:17:55.279 Latency(us) 00:17:55.279 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:17:55.279 =================================================================================================================== 00:17:55.279 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:55.279 [2024-07-15 23:21:09.320380] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:55.279 23:21:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2358740 00:17:55.279 23:21:09 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5K6AewGikH 00:17:55.279 23:21:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:55.279 23:21:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5K6AewGikH 00:17:55.279 23:21:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:55.279 23:21:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:55.279 23:21:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:55.279 23:21:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:55.279 23:21:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5K6AewGikH 00:17:55.279 23:21:09 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:55.279 23:21:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:55.279 23:21:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:55.279 23:21:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.5K6AewGikH' 00:17:55.279 23:21:09 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:55.279 23:21:09 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2360673 00:17:55.279 23:21:09 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:55.279 23:21:09 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:55.279 23:21:09 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2360673 /var/tmp/bdevperf.sock 00:17:55.279 23:21:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2360673 ']' 00:17:55.279 23:21:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:55.279 23:21:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:55.279 23:21:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:55.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:55.279 23:21:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:55.279 23:21:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:55.279 [2024-07-15 23:21:09.608767] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
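For orientation, the passing TLSTESTn1 run above (and each of the negative attach attempts that follow, including the one now starting) drives the host side the same way: start bdevperf in wait mode, attach a TLS controller over its RPC socket, then kick off the verify workload. Condensed into one sketch with the same binaries, flags and key file as the trace (the absolute paths are this job's workspace layout):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
# the script waits for /var/tmp/bdevperf.sock to appear (waitforlisten) before issuing RPCs
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
  -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
  --psk /tmp/tmp.QJntQf9IX5
$SPDK/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests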
00:17:55.279 [2024-07-15 23:21:09.608851] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2360673 ] 00:17:55.279 EAL: No free 2048 kB hugepages reported on node 1 00:17:55.279 [2024-07-15 23:21:09.666254] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.279 [2024-07-15 23:21:09.772463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:55.279 23:21:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:55.279 23:21:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:55.279 23:21:09 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5K6AewGikH 00:17:55.280 [2024-07-15 23:21:10.117566] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:55.280 [2024-07-15 23:21:10.117706] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:55.280 [2024-07-15 23:21:10.123479] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:55.280 [2024-07-15 23:21:10.123999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2850 (107): Transport endpoint is not connected 00:17:55.280 [2024-07-15 23:21:10.124989] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2850 (9): Bad file descriptor 00:17:55.280 [2024-07-15 23:21:10.125987] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:55.280 [2024-07-15 23:21:10.126008] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:55.280 [2024-07-15 23:21:10.126040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:55.280 request: 00:17:55.280 { 00:17:55.280 "name": "TLSTEST", 00:17:55.280 "trtype": "tcp", 00:17:55.280 "traddr": "10.0.0.2", 00:17:55.280 "adrfam": "ipv4", 00:17:55.280 "trsvcid": "4420", 00:17:55.280 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:55.280 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:55.280 "prchk_reftag": false, 00:17:55.280 "prchk_guard": false, 00:17:55.280 "hdgst": false, 00:17:55.280 "ddgst": false, 00:17:55.280 "psk": "/tmp/tmp.5K6AewGikH", 00:17:55.280 "method": "bdev_nvme_attach_controller", 00:17:55.280 "req_id": 1 00:17:55.280 } 00:17:55.280 Got JSON-RPC error response 00:17:55.280 response: 00:17:55.280 { 00:17:55.280 "code": -5, 00:17:55.280 "message": "Input/output error" 00:17:55.280 } 00:17:55.280 23:21:10 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2360673 00:17:55.280 23:21:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2360673 ']' 00:17:55.280 23:21:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2360673 00:17:55.280 23:21:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:55.280 23:21:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:55.280 23:21:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2360673 00:17:55.280 23:21:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:55.280 23:21:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:55.280 23:21:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2360673' 00:17:55.280 killing process with pid 2360673 00:17:55.280 23:21:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2360673 00:17:55.280 Received shutdown signal, test time was about 10.000000 seconds 00:17:55.280 00:17:55.280 Latency(us) 00:17:55.280 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.280 =================================================================================================================== 00:17:55.280 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:55.280 [2024-07-15 23:21:10.176892] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:55.280 23:21:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2360673 00:17:55.280 23:21:10 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:55.280 23:21:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:55.280 23:21:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:55.280 23:21:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:55.280 23:21:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:55.280 23:21:10 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.QJntQf9IX5 00:17:55.280 23:21:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:55.280 23:21:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.QJntQf9IX5 00:17:55.280 23:21:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:55.280 23:21:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:55.280 23:21:10 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:55.280 23:21:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:55.280 23:21:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.QJntQf9IX5 00:17:55.280 23:21:10 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:55.280 23:21:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:55.280 23:21:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:55.280 23:21:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.QJntQf9IX5' 00:17:55.280 23:21:10 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:55.280 23:21:10 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2360817 00:17:55.280 23:21:10 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:55.280 23:21:10 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:55.280 23:21:10 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2360817 /var/tmp/bdevperf.sock 00:17:55.280 23:21:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2360817 ']' 00:17:55.280 23:21:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:55.280 23:21:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:55.280 23:21:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:55.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:55.280 23:21:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:55.280 23:21:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:55.280 [2024-07-15 23:21:10.477375] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
00:17:55.280 [2024-07-15 23:21:10.477453] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2360817 ] 00:17:55.280 EAL: No free 2048 kB hugepages reported on node 1 00:17:55.280 [2024-07-15 23:21:10.536083] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.538 [2024-07-15 23:21:10.644471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:55.538 23:21:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:55.538 23:21:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:55.538 23:21:10 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.QJntQf9IX5 00:17:55.795 [2024-07-15 23:21:10.985762] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:55.795 [2024-07-15 23:21:10.985896] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:55.796 [2024-07-15 23:21:10.993388] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:55.796 [2024-07-15 23:21:10.993422] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:55.796 [2024-07-15 23:21:10.993480] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:55.796 [2024-07-15 23:21:10.994157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147a850 (107): Transport endpoint is not connected 00:17:55.796 [2024-07-15 23:21:10.995147] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147a850 (9): Bad file descriptor 00:17:55.796 [2024-07-15 23:21:10.996147] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:55.796 [2024-07-15 23:21:10.996165] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:55.796 [2024-07-15 23:21:10.996197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:55.796 request: 00:17:55.796 { 00:17:55.796 "name": "TLSTEST", 00:17:55.796 "trtype": "tcp", 00:17:55.796 "traddr": "10.0.0.2", 00:17:55.796 "adrfam": "ipv4", 00:17:55.796 "trsvcid": "4420", 00:17:55.796 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:55.796 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:55.796 "prchk_reftag": false, 00:17:55.796 "prchk_guard": false, 00:17:55.796 "hdgst": false, 00:17:55.796 "ddgst": false, 00:17:55.796 "psk": "/tmp/tmp.QJntQf9IX5", 00:17:55.796 "method": "bdev_nvme_attach_controller", 00:17:55.796 "req_id": 1 00:17:55.796 } 00:17:55.796 Got JSON-RPC error response 00:17:55.796 response: 00:17:55.796 { 00:17:55.796 "code": -5, 00:17:55.796 "message": "Input/output error" 00:17:55.796 } 00:17:55.796 23:21:11 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2360817 00:17:55.796 23:21:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2360817 ']' 00:17:55.796 23:21:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2360817 00:17:55.796 23:21:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:55.796 23:21:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:55.796 23:21:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2360817 00:17:55.796 23:21:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:55.796 23:21:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:55.796 23:21:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2360817' 00:17:55.796 killing process with pid 2360817 00:17:55.796 23:21:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2360817 00:17:55.796 Received shutdown signal, test time was about 10.000000 seconds 00:17:55.796 00:17:55.796 Latency(us) 00:17:55.796 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.796 =================================================================================================================== 00:17:55.796 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:55.796 [2024-07-15 23:21:11.045001] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:55.796 23:21:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2360817 00:17:56.054 23:21:11 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:56.054 23:21:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:56.054 23:21:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:56.054 23:21:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:56.054 23:21:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:56.054 23:21:11 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.QJntQf9IX5 00:17:56.054 23:21:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:56.054 23:21:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.QJntQf9IX5 00:17:56.054 23:21:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:56.054 23:21:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:56.054 23:21:11 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:56.054 23:21:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:56.054 23:21:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.QJntQf9IX5 00:17:56.054 23:21:11 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:56.054 23:21:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:56.054 23:21:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:56.054 23:21:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.QJntQf9IX5' 00:17:56.054 23:21:11 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:56.054 23:21:11 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2360948 00:17:56.054 23:21:11 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:56.054 23:21:11 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:56.054 23:21:11 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2360948 /var/tmp/bdevperf.sock 00:17:56.054 23:21:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2360948 ']' 00:17:56.054 23:21:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:56.054 23:21:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:56.054 23:21:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:56.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:56.054 23:21:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:56.055 23:21:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:56.055 [2024-07-15 23:21:11.344616] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
00:17:56.055 [2024-07-15 23:21:11.344696] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2360948 ] 00:17:56.312 EAL: No free 2048 kB hugepages reported on node 1 00:17:56.312 [2024-07-15 23:21:11.402401] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.312 [2024-07-15 23:21:11.507523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:56.312 23:21:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:56.312 23:21:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:56.312 23:21:11 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QJntQf9IX5 00:17:56.569 [2024-07-15 23:21:11.833268] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:56.569 [2024-07-15 23:21:11.833398] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:56.569 [2024-07-15 23:21:11.845544] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:56.569 [2024-07-15 23:21:11.845577] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:56.569 [2024-07-15 23:21:11.845635] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:56.569 [2024-07-15 23:21:11.845904] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229d850 (107): Transport endpoint is not connected 00:17:56.569 [2024-07-15 23:21:11.846894] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229d850 (9): Bad file descriptor 00:17:56.569 [2024-07-15 23:21:11.847894] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:56.569 [2024-07-15 23:21:11.847913] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:56.569 [2024-07-15 23:21:11.847932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:17:56.569 request: 00:17:56.569 { 00:17:56.569 "name": "TLSTEST", 00:17:56.569 "trtype": "tcp", 00:17:56.569 "traddr": "10.0.0.2", 00:17:56.569 "adrfam": "ipv4", 00:17:56.569 "trsvcid": "4420", 00:17:56.569 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:56.569 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:56.569 "prchk_reftag": false, 00:17:56.569 "prchk_guard": false, 00:17:56.569 "hdgst": false, 00:17:56.569 "ddgst": false, 00:17:56.569 "psk": "/tmp/tmp.QJntQf9IX5", 00:17:56.569 "method": "bdev_nvme_attach_controller", 00:17:56.569 "req_id": 1 00:17:56.569 } 00:17:56.569 Got JSON-RPC error response 00:17:56.569 response: 00:17:56.569 { 00:17:56.569 "code": -5, 00:17:56.569 "message": "Input/output error" 00:17:56.569 } 00:17:56.569 23:21:11 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2360948 00:17:56.569 23:21:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2360948 ']' 00:17:56.569 23:21:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2360948 00:17:56.569 23:21:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:56.569 23:21:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:56.569 23:21:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2360948 00:17:56.827 23:21:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:56.827 23:21:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:56.827 23:21:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2360948' 00:17:56.827 killing process with pid 2360948 00:17:56.827 23:21:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2360948 00:17:56.827 Received shutdown signal, test time was about 10.000000 seconds 00:17:56.827 00:17:56.827 Latency(us) 00:17:56.827 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.827 =================================================================================================================== 00:17:56.827 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:56.827 [2024-07-15 23:21:11.897153] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:56.827 23:21:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2360948 00:17:57.086 23:21:12 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:57.086 23:21:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:57.086 23:21:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:57.086 23:21:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:57.086 23:21:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:57.086 23:21:12 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:57.086 23:21:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:57.086 23:21:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:57.086 23:21:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:57.086 23:21:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:57.086 23:21:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:17:57.086 23:21:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:57.086 23:21:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:57.086 23:21:12 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:57.086 23:21:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:57.086 23:21:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:57.086 23:21:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:17:57.086 23:21:12 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:57.086 23:21:12 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2360975 00:17:57.086 23:21:12 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:57.086 23:21:12 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:57.086 23:21:12 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2360975 /var/tmp/bdevperf.sock 00:17:57.086 23:21:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2360975 ']' 00:17:57.086 23:21:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:57.086 23:21:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:57.086 23:21:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:57.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:57.086 23:21:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:57.086 23:21:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:57.086 [2024-07-15 23:21:12.203597] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
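Both PSK lookup failures above, the wrong hostnqn (host2 against cnode1) and then the wrong subnqn (host1 against cnode2), report the same root cause: "Could not find PSK for identity: NVMe0R01 <hostnqn> <subnqn>". The target resolves the handshake key solely from that identity string, so only the exact (hostnqn, subnqn) pair registered with nvmf_subsystem_add_host --psk can connect; anything else fails the attach with -5 Input/output error. A rough illustration of the identity, with the layout lifted straight from those error messages:

hostnqn=nqn.2016-06.io.spdk:host2     # what the failing initiator presented
subnqn=nqn.2016-06.io.spdk:cnode1     # what it tried to reach
printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"
# -> NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1  (no key registered for this pair)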
00:17:57.086 [2024-07-15 23:21:12.203675] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2360975 ] 00:17:57.086 EAL: No free 2048 kB hugepages reported on node 1 00:17:57.086 [2024-07-15 23:21:12.262330] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.086 [2024-07-15 23:21:12.374628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:57.345 23:21:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:57.345 23:21:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:57.345 23:21:12 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:57.602 [2024-07-15 23:21:12.727896] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:57.602 [2024-07-15 23:21:12.729173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7ffb0 (9): Bad file descriptor 00:17:57.602 [2024-07-15 23:21:12.730175] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:57.602 [2024-07-15 23:21:12.730196] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:57.602 [2024-07-15 23:21:12.730228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:57.602 request: 00:17:57.602 { 00:17:57.602 "name": "TLSTEST", 00:17:57.602 "trtype": "tcp", 00:17:57.602 "traddr": "10.0.0.2", 00:17:57.602 "adrfam": "ipv4", 00:17:57.602 "trsvcid": "4420", 00:17:57.602 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:57.602 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:57.602 "prchk_reftag": false, 00:17:57.602 "prchk_guard": false, 00:17:57.602 "hdgst": false, 00:17:57.602 "ddgst": false, 00:17:57.602 "method": "bdev_nvme_attach_controller", 00:17:57.602 "req_id": 1 00:17:57.602 } 00:17:57.602 Got JSON-RPC error response 00:17:57.602 response: 00:17:57.602 { 00:17:57.602 "code": -5, 00:17:57.603 "message": "Input/output error" 00:17:57.603 } 00:17:57.603 23:21:12 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2360975 00:17:57.603 23:21:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2360975 ']' 00:17:57.603 23:21:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2360975 00:17:57.603 23:21:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:57.603 23:21:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:57.603 23:21:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2360975 00:17:57.603 23:21:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:57.603 23:21:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:57.603 23:21:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2360975' 00:17:57.603 killing process with pid 2360975 00:17:57.603 23:21:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2360975 00:17:57.603 Received shutdown signal, test time was about 10.000000 seconds 00:17:57.603 00:17:57.603 Latency(us) 00:17:57.603 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.603 =================================================================================================================== 00:17:57.603 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:57.603 23:21:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2360975 00:17:57.860 23:21:13 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:57.860 23:21:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:57.860 23:21:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:57.860 23:21:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:57.860 23:21:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:57.860 23:21:13 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 2356839 00:17:57.860 23:21:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2356839 ']' 00:17:57.860 23:21:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2356839 00:17:57.860 23:21:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:57.860 23:21:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:57.860 23:21:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2356839 00:17:57.860 23:21:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:57.860 23:21:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:57.860 23:21:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2356839' 00:17:57.860 
killing process with pid 2356839 00:17:57.860 23:21:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2356839 00:17:57.860 [2024-07-15 23:21:13.066583] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:57.861 23:21:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2356839 00:17:58.117 23:21:13 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:17:58.117 23:21:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:17:58.117 23:21:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:58.117 23:21:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:58.117 23:21:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:58.117 23:21:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:17:58.117 23:21:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:58.117 23:21:13 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:58.117 23:21:13 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:17:58.117 23:21:13 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.81Rz786qYM 00:17:58.117 23:21:13 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:58.117 23:21:13 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.81Rz786qYM 00:17:58.117 23:21:13 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:17:58.117 23:21:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:58.117 23:21:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:58.117 23:21:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:58.117 23:21:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2361168 00:17:58.117 23:21:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:58.117 23:21:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2361168 00:17:58.117 23:21:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2361168 ']' 00:17:58.117 23:21:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.117 23:21:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:58.117 23:21:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:58.118 23:21:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:58.118 23:21:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:58.375 [2024-07-15 23:21:13.471626] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
00:17:58.375 [2024-07-15 23:21:13.471728] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:58.375 EAL: No free 2048 kB hugepages reported on node 1 00:17:58.375 [2024-07-15 23:21:13.539885] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.375 [2024-07-15 23:21:13.655176] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:58.375 [2024-07-15 23:21:13.655243] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:58.375 [2024-07-15 23:21:13.655260] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:58.375 [2024-07-15 23:21:13.655274] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:58.375 [2024-07-15 23:21:13.655286] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:58.375 [2024-07-15 23:21:13.655333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:59.308 23:21:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:59.308 23:21:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:59.308 23:21:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:59.308 23:21:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:59.308 23:21:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:59.308 23:21:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:59.308 23:21:14 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.81Rz786qYM 00:17:59.308 23:21:14 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.81Rz786qYM 00:17:59.308 23:21:14 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:59.566 [2024-07-15 23:21:14.653012] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:59.566 23:21:14 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:59.823 23:21:14 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:00.081 [2024-07-15 23:21:15.146308] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:00.081 [2024-07-15 23:21:15.146535] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:00.081 23:21:15 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:00.339 malloc0 00:18:00.339 23:21:15 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:00.598 23:21:15 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.81Rz786qYM 00:18:00.598 [2024-07-15 23:21:15.883684] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:00.598 23:21:15 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.81Rz786qYM 00:18:00.598 23:21:15 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:00.598 23:21:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:00.598 23:21:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:00.598 23:21:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.81Rz786qYM' 00:18:00.598 23:21:15 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:00.598 23:21:15 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2361527 00:18:00.598 23:21:15 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:00.598 23:21:15 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:00.598 23:21:15 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2361527 /var/tmp/bdevperf.sock 00:18:00.598 23:21:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2361527 ']' 00:18:00.598 23:21:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:00.598 23:21:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:00.598 23:21:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:00.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:00.598 23:21:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:00.598 23:21:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:00.857 [2024-07-15 23:21:15.947110] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
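setup_nvmf_tgt, here invoked with the 48-byte /tmp/tmp.81Rz786qYM key, reduces to the same handful of RPCs each time it runs. Condensed from the trace (rpc.py is the workspace copy used throughout this job):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener (flagged experimental above)
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.81Rz786qYM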
00:18:00.857 [2024-07-15 23:21:15.947185] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2361527 ] 00:18:00.857 EAL: No free 2048 kB hugepages reported on node 1 00:18:00.857 [2024-07-15 23:21:16.006400] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.857 [2024-07-15 23:21:16.114733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:01.115 23:21:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:01.115 23:21:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:01.115 23:21:16 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.81Rz786qYM 00:18:01.404 [2024-07-15 23:21:16.448349] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:01.405 [2024-07-15 23:21:16.448477] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:01.405 TLSTESTn1 00:18:01.405 23:21:16 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:01.405 Running I/O for 10 seconds... 00:18:11.400 00:18:11.400 Latency(us) 00:18:11.400 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.400 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:11.400 Verification LBA range: start 0x0 length 0x2000 00:18:11.400 TLSTESTn1 : 10.02 3676.76 14.36 0.00 0.00 34751.92 6140.97 81944.27 00:18:11.400 =================================================================================================================== 00:18:11.400 Total : 3676.76 14.36 0.00 0.00 34751.92 6140.97 81944.27 00:18:11.400 0 00:18:11.400 23:21:26 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:11.400 23:21:26 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2361527 00:18:11.400 23:21:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2361527 ']' 00:18:11.400 23:21:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2361527 00:18:11.400 23:21:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:11.400 23:21:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:11.400 23:21:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2361527 00:18:11.663 23:21:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:11.663 23:21:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:11.663 23:21:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2361527' 00:18:11.663 killing process with pid 2361527 00:18:11.663 23:21:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2361527 00:18:11.663 Received shutdown signal, test time was about 10.000000 seconds 00:18:11.663 00:18:11.663 Latency(us) 00:18:11.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:18:11.663 =================================================================================================================== 00:18:11.663 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:11.663 [2024-07-15 23:21:26.720972] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:11.663 23:21:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2361527 00:18:11.922 23:21:26 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.81Rz786qYM 00:18:11.922 23:21:26 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.81Rz786qYM 00:18:11.922 23:21:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:11.922 23:21:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.81Rz786qYM 00:18:11.922 23:21:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:11.922 23:21:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:11.922 23:21:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:11.922 23:21:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:11.922 23:21:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.81Rz786qYM 00:18:11.922 23:21:26 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:11.922 23:21:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:11.922 23:21:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:11.922 23:21:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.81Rz786qYM' 00:18:11.922 23:21:26 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:11.922 23:21:26 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2362767 00:18:11.922 23:21:26 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:11.922 23:21:26 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:11.922 23:21:26 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2362767 /var/tmp/bdevperf.sock 00:18:11.922 23:21:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2362767 ']' 00:18:11.922 23:21:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:11.922 23:21:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:11.922 23:21:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:11.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:11.922 23:21:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:11.922 23:21:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:11.922 [2024-07-15 23:21:27.035846] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
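The chmod 0666 just above is deliberate: the attach attempt traced below fails because bdev_nvme refuses to load a key file that is readable by group or others ("Incorrect permissions for PSK file", surfaced to the RPC caller as -1 Operation not permitted), which is why every passing case first chmods the key to 0600. A quick pre-flight check, as a sketch:

key=/tmp/tmp.81Rz786qYM
chmod 0600 "$key"
stat -c '%a %n' "$key"   # expect "600 /tmp/tmp.81Rz786qYM" before passing it to --psk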
00:18:11.922 [2024-07-15 23:21:27.035928] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2362767 ] 00:18:11.922 EAL: No free 2048 kB hugepages reported on node 1 00:18:11.922 [2024-07-15 23:21:27.098603] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.922 [2024-07-15 23:21:27.216480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:12.181 23:21:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:12.181 23:21:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:12.181 23:21:27 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.81Rz786qYM 00:18:12.440 [2024-07-15 23:21:27.572834] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:12.440 [2024-07-15 23:21:27.572925] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:12.440 [2024-07-15 23:21:27.572941] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.81Rz786qYM 00:18:12.440 request: 00:18:12.440 { 00:18:12.440 "name": "TLSTEST", 00:18:12.440 "trtype": "tcp", 00:18:12.440 "traddr": "10.0.0.2", 00:18:12.440 "adrfam": "ipv4", 00:18:12.440 "trsvcid": "4420", 00:18:12.440 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:12.440 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:12.440 "prchk_reftag": false, 00:18:12.440 "prchk_guard": false, 00:18:12.440 "hdgst": false, 00:18:12.440 "ddgst": false, 00:18:12.440 "psk": "/tmp/tmp.81Rz786qYM", 00:18:12.440 "method": "bdev_nvme_attach_controller", 00:18:12.440 "req_id": 1 00:18:12.440 } 00:18:12.440 Got JSON-RPC error response 00:18:12.440 response: 00:18:12.440 { 00:18:12.440 "code": -1, 00:18:12.440 "message": "Operation not permitted" 00:18:12.440 } 00:18:12.440 23:21:27 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2362767 00:18:12.440 23:21:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2362767 ']' 00:18:12.440 23:21:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2362767 00:18:12.440 23:21:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:12.440 23:21:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:12.440 23:21:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2362767 00:18:12.440 23:21:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:12.440 23:21:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:12.440 23:21:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2362767' 00:18:12.440 killing process with pid 2362767 00:18:12.440 23:21:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2362767 00:18:12.440 Received shutdown signal, test time was about 10.000000 seconds 00:18:12.440 00:18:12.440 Latency(us) 00:18:12.440 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.440 
=================================================================================================================== 00:18:12.440 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:12.440 23:21:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2362767 00:18:12.699 23:21:27 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:12.699 23:21:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:12.699 23:21:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:12.699 23:21:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:12.699 23:21:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:12.699 23:21:27 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 2361168 00:18:12.699 23:21:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2361168 ']' 00:18:12.699 23:21:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2361168 00:18:12.699 23:21:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:12.699 23:21:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:12.699 23:21:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2361168 00:18:12.699 23:21:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:12.699 23:21:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:12.699 23:21:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2361168' 00:18:12.699 killing process with pid 2361168 00:18:12.699 23:21:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2361168 00:18:12.699 [2024-07-15 23:21:27.911018] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:12.699 23:21:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2361168 00:18:12.958 23:21:28 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:18:12.958 23:21:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:12.958 23:21:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:12.958 23:21:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:12.958 23:21:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2362990 00:18:12.958 23:21:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:12.958 23:21:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2362990 00:18:12.958 23:21:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2362990 ']' 00:18:12.958 23:21:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:12.958 23:21:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:12.958 23:21:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:12.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:12.958 23:21:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:12.958 23:21:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:12.958 [2024-07-15 23:21:28.263290] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:18:12.958 [2024-07-15 23:21:28.263378] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:13.216 EAL: No free 2048 kB hugepages reported on node 1 00:18:13.216 [2024-07-15 23:21:28.331795] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.216 [2024-07-15 23:21:28.446828] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:13.216 [2024-07-15 23:21:28.446896] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:13.216 [2024-07-15 23:21:28.446912] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:13.216 [2024-07-15 23:21:28.446925] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:13.216 [2024-07-15 23:21:28.446937] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:13.216 [2024-07-15 23:21:28.446976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:14.151 23:21:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:14.151 23:21:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:14.151 23:21:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:14.151 23:21:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:14.151 23:21:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.151 23:21:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:14.151 23:21:29 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.81Rz786qYM 00:18:14.151 23:21:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:14.151 23:21:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.81Rz786qYM 00:18:14.151 23:21:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:18:14.151 23:21:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:14.151 23:21:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:18:14.151 23:21:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:14.151 23:21:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.81Rz786qYM 00:18:14.151 23:21:29 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.81Rz786qYM 00:18:14.151 23:21:29 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:14.151 [2024-07-15 23:21:29.453285] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:14.408 23:21:29 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:14.409 
23:21:29 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:14.666 [2024-07-15 23:21:29.942559] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:14.666 [2024-07-15 23:21:29.942791] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:14.666 23:21:29 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:14.924 malloc0 00:18:14.924 23:21:30 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:15.181 23:21:30 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.81Rz786qYM 00:18:15.439 [2024-07-15 23:21:30.696820] tcp.c:3603:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:15.439 [2024-07-15 23:21:30.696864] tcp.c:3689:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:18:15.439 [2024-07-15 23:21:30.696903] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:15.439 request: 00:18:15.439 { 00:18:15.439 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:15.439 "host": "nqn.2016-06.io.spdk:host1", 00:18:15.439 "psk": "/tmp/tmp.81Rz786qYM", 00:18:15.439 "method": "nvmf_subsystem_add_host", 00:18:15.439 "req_id": 1 00:18:15.439 } 00:18:15.439 Got JSON-RPC error response 00:18:15.439 response: 00:18:15.439 { 00:18:15.439 "code": -32603, 00:18:15.439 "message": "Internal error" 00:18:15.439 } 00:18:15.439 23:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:15.439 23:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:15.439 23:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:15.439 23:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:15.439 23:21:30 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 2362990 00:18:15.439 23:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2362990 ']' 00:18:15.439 23:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2362990 00:18:15.439 23:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:15.439 23:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:15.439 23:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2362990 00:18:15.439 23:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:15.439 23:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:15.439 23:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2362990' 00:18:15.439 killing process with pid 2362990 00:18:15.439 23:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2362990 00:18:15.439 23:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2362990 00:18:16.005 23:21:31 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.81Rz786qYM 00:18:16.005 23:21:31 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:18:16.005 
23:21:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:16.005 23:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:16.005 23:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:16.005 23:21:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2363300 00:18:16.005 23:21:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:16.005 23:21:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2363300 00:18:16.005 23:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2363300 ']' 00:18:16.005 23:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.005 23:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:16.005 23:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:16.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:16.006 23:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:16.006 23:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:16.006 [2024-07-15 23:21:31.077677] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:18:16.006 [2024-07-15 23:21:31.077759] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:16.006 EAL: No free 2048 kB hugepages reported on node 1 00:18:16.006 [2024-07-15 23:21:31.143788] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.006 [2024-07-15 23:21:31.266922] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:16.006 [2024-07-15 23:21:31.266978] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:16.006 [2024-07-15 23:21:31.267016] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:16.006 [2024-07-15 23:21:31.267036] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:16.006 [2024-07-15 23:21:31.267046] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:16.006 [2024-07-15 23:21:31.267099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:16.263 23:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:16.263 23:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:16.263 23:21:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:16.263 23:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:16.263 23:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:16.263 23:21:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:16.263 23:21:31 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.81Rz786qYM 00:18:16.263 23:21:31 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.81Rz786qYM 00:18:16.263 23:21:31 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:16.521 [2024-07-15 23:21:31.656051] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:16.521 23:21:31 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:16.779 23:21:31 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:17.037 [2024-07-15 23:21:32.149391] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:17.037 [2024-07-15 23:21:32.149631] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:17.037 23:21:32 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:17.296 malloc0 00:18:17.296 23:21:32 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:17.553 23:21:32 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.81Rz786qYM 00:18:17.812 [2024-07-15 23:21:32.911298] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:17.812 23:21:32 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=2363579 00:18:17.812 23:21:32 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:17.812 23:21:32 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:17.812 23:21:32 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 2363579 /var/tmp/bdevperf.sock 00:18:17.812 23:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2363579 ']' 00:18:17.812 23:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:17.812 23:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:17.812 23:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:17.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:17.812 23:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:17.812 23:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:17.812 [2024-07-15 23:21:32.973394] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:18:17.812 [2024-07-15 23:21:32.973466] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2363579 ] 00:18:17.812 EAL: No free 2048 kB hugepages reported on node 1 00:18:17.812 [2024-07-15 23:21:33.031098] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.071 [2024-07-15 23:21:33.137431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:18.071 23:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:18.071 23:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:18.071 23:21:33 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.81Rz786qYM 00:18:18.327 [2024-07-15 23:21:33.469908] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:18.327 [2024-07-15 23:21:33.470048] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:18.327 TLSTESTn1 00:18:18.327 23:21:33 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:18.585 23:21:33 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:18:18.585 "subsystems": [ 00:18:18.585 { 00:18:18.585 "subsystem": "keyring", 00:18:18.585 "config": [] 00:18:18.585 }, 00:18:18.585 { 00:18:18.585 "subsystem": "iobuf", 00:18:18.585 "config": [ 00:18:18.585 { 00:18:18.585 "method": "iobuf_set_options", 00:18:18.585 "params": { 00:18:18.585 "small_pool_count": 8192, 00:18:18.585 "large_pool_count": 1024, 00:18:18.585 "small_bufsize": 8192, 00:18:18.585 "large_bufsize": 135168 00:18:18.585 } 00:18:18.585 } 00:18:18.585 ] 00:18:18.585 }, 00:18:18.585 { 00:18:18.585 "subsystem": "sock", 00:18:18.585 "config": [ 00:18:18.585 { 00:18:18.585 "method": "sock_set_default_impl", 00:18:18.585 "params": { 00:18:18.585 "impl_name": "posix" 00:18:18.585 } 00:18:18.585 }, 00:18:18.585 { 00:18:18.585 "method": "sock_impl_set_options", 00:18:18.585 "params": { 00:18:18.585 "impl_name": "ssl", 00:18:18.585 "recv_buf_size": 4096, 00:18:18.585 "send_buf_size": 4096, 00:18:18.585 "enable_recv_pipe": true, 00:18:18.585 "enable_quickack": false, 00:18:18.585 "enable_placement_id": 0, 00:18:18.585 "enable_zerocopy_send_server": true, 00:18:18.585 "enable_zerocopy_send_client": false, 00:18:18.585 "zerocopy_threshold": 0, 00:18:18.585 "tls_version": 0, 00:18:18.585 "enable_ktls": false 00:18:18.585 } 00:18:18.585 }, 00:18:18.585 { 00:18:18.585 "method": "sock_impl_set_options", 00:18:18.585 "params": { 00:18:18.585 "impl_name": "posix", 00:18:18.585 "recv_buf_size": 2097152, 00:18:18.585 
"send_buf_size": 2097152, 00:18:18.585 "enable_recv_pipe": true, 00:18:18.585 "enable_quickack": false, 00:18:18.585 "enable_placement_id": 0, 00:18:18.585 "enable_zerocopy_send_server": true, 00:18:18.585 "enable_zerocopy_send_client": false, 00:18:18.585 "zerocopy_threshold": 0, 00:18:18.585 "tls_version": 0, 00:18:18.585 "enable_ktls": false 00:18:18.585 } 00:18:18.585 } 00:18:18.585 ] 00:18:18.585 }, 00:18:18.585 { 00:18:18.585 "subsystem": "vmd", 00:18:18.585 "config": [] 00:18:18.585 }, 00:18:18.585 { 00:18:18.585 "subsystem": "accel", 00:18:18.585 "config": [ 00:18:18.585 { 00:18:18.585 "method": "accel_set_options", 00:18:18.585 "params": { 00:18:18.585 "small_cache_size": 128, 00:18:18.585 "large_cache_size": 16, 00:18:18.585 "task_count": 2048, 00:18:18.585 "sequence_count": 2048, 00:18:18.585 "buf_count": 2048 00:18:18.585 } 00:18:18.585 } 00:18:18.585 ] 00:18:18.585 }, 00:18:18.585 { 00:18:18.585 "subsystem": "bdev", 00:18:18.585 "config": [ 00:18:18.585 { 00:18:18.585 "method": "bdev_set_options", 00:18:18.585 "params": { 00:18:18.585 "bdev_io_pool_size": 65535, 00:18:18.585 "bdev_io_cache_size": 256, 00:18:18.585 "bdev_auto_examine": true, 00:18:18.585 "iobuf_small_cache_size": 128, 00:18:18.585 "iobuf_large_cache_size": 16 00:18:18.585 } 00:18:18.585 }, 00:18:18.585 { 00:18:18.585 "method": "bdev_raid_set_options", 00:18:18.585 "params": { 00:18:18.585 "process_window_size_kb": 1024 00:18:18.585 } 00:18:18.585 }, 00:18:18.585 { 00:18:18.585 "method": "bdev_iscsi_set_options", 00:18:18.585 "params": { 00:18:18.585 "timeout_sec": 30 00:18:18.585 } 00:18:18.585 }, 00:18:18.585 { 00:18:18.585 "method": "bdev_nvme_set_options", 00:18:18.585 "params": { 00:18:18.585 "action_on_timeout": "none", 00:18:18.585 "timeout_us": 0, 00:18:18.585 "timeout_admin_us": 0, 00:18:18.585 "keep_alive_timeout_ms": 10000, 00:18:18.585 "arbitration_burst": 0, 00:18:18.585 "low_priority_weight": 0, 00:18:18.585 "medium_priority_weight": 0, 00:18:18.585 "high_priority_weight": 0, 00:18:18.585 "nvme_adminq_poll_period_us": 10000, 00:18:18.585 "nvme_ioq_poll_period_us": 0, 00:18:18.585 "io_queue_requests": 0, 00:18:18.585 "delay_cmd_submit": true, 00:18:18.585 "transport_retry_count": 4, 00:18:18.585 "bdev_retry_count": 3, 00:18:18.585 "transport_ack_timeout": 0, 00:18:18.585 "ctrlr_loss_timeout_sec": 0, 00:18:18.585 "reconnect_delay_sec": 0, 00:18:18.585 "fast_io_fail_timeout_sec": 0, 00:18:18.585 "disable_auto_failback": false, 00:18:18.585 "generate_uuids": false, 00:18:18.585 "transport_tos": 0, 00:18:18.585 "nvme_error_stat": false, 00:18:18.585 "rdma_srq_size": 0, 00:18:18.585 "io_path_stat": false, 00:18:18.585 "allow_accel_sequence": false, 00:18:18.585 "rdma_max_cq_size": 0, 00:18:18.585 "rdma_cm_event_timeout_ms": 0, 00:18:18.585 "dhchap_digests": [ 00:18:18.585 "sha256", 00:18:18.585 "sha384", 00:18:18.585 "sha512" 00:18:18.585 ], 00:18:18.585 "dhchap_dhgroups": [ 00:18:18.585 "null", 00:18:18.585 "ffdhe2048", 00:18:18.585 "ffdhe3072", 00:18:18.585 "ffdhe4096", 00:18:18.585 "ffdhe6144", 00:18:18.585 "ffdhe8192" 00:18:18.585 ] 00:18:18.585 } 00:18:18.585 }, 00:18:18.585 { 00:18:18.585 "method": "bdev_nvme_set_hotplug", 00:18:18.585 "params": { 00:18:18.585 "period_us": 100000, 00:18:18.585 "enable": false 00:18:18.585 } 00:18:18.585 }, 00:18:18.585 { 00:18:18.585 "method": "bdev_malloc_create", 00:18:18.585 "params": { 00:18:18.585 "name": "malloc0", 00:18:18.585 "num_blocks": 8192, 00:18:18.585 "block_size": 4096, 00:18:18.585 "physical_block_size": 4096, 00:18:18.585 "uuid": 
"3c7e8395-6a69-46ca-82d8-935e74a9084b", 00:18:18.585 "optimal_io_boundary": 0 00:18:18.585 } 00:18:18.585 }, 00:18:18.585 { 00:18:18.585 "method": "bdev_wait_for_examine" 00:18:18.585 } 00:18:18.585 ] 00:18:18.585 }, 00:18:18.585 { 00:18:18.585 "subsystem": "nbd", 00:18:18.585 "config": [] 00:18:18.585 }, 00:18:18.585 { 00:18:18.585 "subsystem": "scheduler", 00:18:18.585 "config": [ 00:18:18.585 { 00:18:18.585 "method": "framework_set_scheduler", 00:18:18.585 "params": { 00:18:18.585 "name": "static" 00:18:18.585 } 00:18:18.585 } 00:18:18.585 ] 00:18:18.585 }, 00:18:18.585 { 00:18:18.586 "subsystem": "nvmf", 00:18:18.586 "config": [ 00:18:18.586 { 00:18:18.586 "method": "nvmf_set_config", 00:18:18.586 "params": { 00:18:18.586 "discovery_filter": "match_any", 00:18:18.586 "admin_cmd_passthru": { 00:18:18.586 "identify_ctrlr": false 00:18:18.586 } 00:18:18.586 } 00:18:18.586 }, 00:18:18.586 { 00:18:18.586 "method": "nvmf_set_max_subsystems", 00:18:18.586 "params": { 00:18:18.586 "max_subsystems": 1024 00:18:18.586 } 00:18:18.586 }, 00:18:18.586 { 00:18:18.586 "method": "nvmf_set_crdt", 00:18:18.586 "params": { 00:18:18.586 "crdt1": 0, 00:18:18.586 "crdt2": 0, 00:18:18.586 "crdt3": 0 00:18:18.586 } 00:18:18.586 }, 00:18:18.586 { 00:18:18.586 "method": "nvmf_create_transport", 00:18:18.586 "params": { 00:18:18.586 "trtype": "TCP", 00:18:18.586 "max_queue_depth": 128, 00:18:18.586 "max_io_qpairs_per_ctrlr": 127, 00:18:18.586 "in_capsule_data_size": 4096, 00:18:18.586 "max_io_size": 131072, 00:18:18.586 "io_unit_size": 131072, 00:18:18.586 "max_aq_depth": 128, 00:18:18.586 "num_shared_buffers": 511, 00:18:18.586 "buf_cache_size": 4294967295, 00:18:18.586 "dif_insert_or_strip": false, 00:18:18.586 "zcopy": false, 00:18:18.586 "c2h_success": false, 00:18:18.586 "sock_priority": 0, 00:18:18.586 "abort_timeout_sec": 1, 00:18:18.586 "ack_timeout": 0, 00:18:18.586 "data_wr_pool_size": 0 00:18:18.586 } 00:18:18.586 }, 00:18:18.586 { 00:18:18.586 "method": "nvmf_create_subsystem", 00:18:18.586 "params": { 00:18:18.586 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.586 "allow_any_host": false, 00:18:18.586 "serial_number": "SPDK00000000000001", 00:18:18.586 "model_number": "SPDK bdev Controller", 00:18:18.586 "max_namespaces": 10, 00:18:18.586 "min_cntlid": 1, 00:18:18.586 "max_cntlid": 65519, 00:18:18.586 "ana_reporting": false 00:18:18.586 } 00:18:18.586 }, 00:18:18.586 { 00:18:18.586 "method": "nvmf_subsystem_add_host", 00:18:18.586 "params": { 00:18:18.586 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.586 "host": "nqn.2016-06.io.spdk:host1", 00:18:18.586 "psk": "/tmp/tmp.81Rz786qYM" 00:18:18.586 } 00:18:18.586 }, 00:18:18.586 { 00:18:18.586 "method": "nvmf_subsystem_add_ns", 00:18:18.586 "params": { 00:18:18.586 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.586 "namespace": { 00:18:18.586 "nsid": 1, 00:18:18.586 "bdev_name": "malloc0", 00:18:18.586 "nguid": "3C7E83956A6946CA82D8935E74A9084B", 00:18:18.586 "uuid": "3c7e8395-6a69-46ca-82d8-935e74a9084b", 00:18:18.586 "no_auto_visible": false 00:18:18.586 } 00:18:18.586 } 00:18:18.586 }, 00:18:18.586 { 00:18:18.586 "method": "nvmf_subsystem_add_listener", 00:18:18.586 "params": { 00:18:18.586 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.586 "listen_address": { 00:18:18.586 "trtype": "TCP", 00:18:18.586 "adrfam": "IPv4", 00:18:18.586 "traddr": "10.0.0.2", 00:18:18.586 "trsvcid": "4420" 00:18:18.586 }, 00:18:18.586 "secure_channel": true 00:18:18.586 } 00:18:18.586 } 00:18:18.586 ] 00:18:18.586 } 00:18:18.586 ] 00:18:18.586 }' 00:18:18.843 23:21:33 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:19.101 23:21:34 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:18:19.101 "subsystems": [ 00:18:19.101 { 00:18:19.101 "subsystem": "keyring", 00:18:19.101 "config": [] 00:18:19.101 }, 00:18:19.101 { 00:18:19.101 "subsystem": "iobuf", 00:18:19.101 "config": [ 00:18:19.101 { 00:18:19.101 "method": "iobuf_set_options", 00:18:19.101 "params": { 00:18:19.101 "small_pool_count": 8192, 00:18:19.101 "large_pool_count": 1024, 00:18:19.101 "small_bufsize": 8192, 00:18:19.101 "large_bufsize": 135168 00:18:19.101 } 00:18:19.101 } 00:18:19.101 ] 00:18:19.101 }, 00:18:19.101 { 00:18:19.101 "subsystem": "sock", 00:18:19.101 "config": [ 00:18:19.101 { 00:18:19.101 "method": "sock_set_default_impl", 00:18:19.101 "params": { 00:18:19.101 "impl_name": "posix" 00:18:19.101 } 00:18:19.101 }, 00:18:19.101 { 00:18:19.101 "method": "sock_impl_set_options", 00:18:19.101 "params": { 00:18:19.101 "impl_name": "ssl", 00:18:19.101 "recv_buf_size": 4096, 00:18:19.101 "send_buf_size": 4096, 00:18:19.101 "enable_recv_pipe": true, 00:18:19.101 "enable_quickack": false, 00:18:19.101 "enable_placement_id": 0, 00:18:19.101 "enable_zerocopy_send_server": true, 00:18:19.101 "enable_zerocopy_send_client": false, 00:18:19.101 "zerocopy_threshold": 0, 00:18:19.101 "tls_version": 0, 00:18:19.101 "enable_ktls": false 00:18:19.101 } 00:18:19.101 }, 00:18:19.101 { 00:18:19.101 "method": "sock_impl_set_options", 00:18:19.101 "params": { 00:18:19.101 "impl_name": "posix", 00:18:19.101 "recv_buf_size": 2097152, 00:18:19.101 "send_buf_size": 2097152, 00:18:19.101 "enable_recv_pipe": true, 00:18:19.101 "enable_quickack": false, 00:18:19.101 "enable_placement_id": 0, 00:18:19.101 "enable_zerocopy_send_server": true, 00:18:19.101 "enable_zerocopy_send_client": false, 00:18:19.101 "zerocopy_threshold": 0, 00:18:19.101 "tls_version": 0, 00:18:19.101 "enable_ktls": false 00:18:19.101 } 00:18:19.101 } 00:18:19.101 ] 00:18:19.101 }, 00:18:19.101 { 00:18:19.101 "subsystem": "vmd", 00:18:19.101 "config": [] 00:18:19.101 }, 00:18:19.101 { 00:18:19.101 "subsystem": "accel", 00:18:19.101 "config": [ 00:18:19.101 { 00:18:19.101 "method": "accel_set_options", 00:18:19.101 "params": { 00:18:19.101 "small_cache_size": 128, 00:18:19.101 "large_cache_size": 16, 00:18:19.101 "task_count": 2048, 00:18:19.101 "sequence_count": 2048, 00:18:19.101 "buf_count": 2048 00:18:19.101 } 00:18:19.101 } 00:18:19.101 ] 00:18:19.101 }, 00:18:19.101 { 00:18:19.101 "subsystem": "bdev", 00:18:19.101 "config": [ 00:18:19.101 { 00:18:19.101 "method": "bdev_set_options", 00:18:19.101 "params": { 00:18:19.101 "bdev_io_pool_size": 65535, 00:18:19.101 "bdev_io_cache_size": 256, 00:18:19.101 "bdev_auto_examine": true, 00:18:19.101 "iobuf_small_cache_size": 128, 00:18:19.101 "iobuf_large_cache_size": 16 00:18:19.101 } 00:18:19.101 }, 00:18:19.101 { 00:18:19.101 "method": "bdev_raid_set_options", 00:18:19.101 "params": { 00:18:19.101 "process_window_size_kb": 1024 00:18:19.101 } 00:18:19.101 }, 00:18:19.101 { 00:18:19.101 "method": "bdev_iscsi_set_options", 00:18:19.101 "params": { 00:18:19.101 "timeout_sec": 30 00:18:19.101 } 00:18:19.101 }, 00:18:19.101 { 00:18:19.101 "method": "bdev_nvme_set_options", 00:18:19.101 "params": { 00:18:19.101 "action_on_timeout": "none", 00:18:19.101 "timeout_us": 0, 00:18:19.101 "timeout_admin_us": 0, 00:18:19.101 "keep_alive_timeout_ms": 10000, 00:18:19.101 "arbitration_burst": 0, 
00:18:19.101 "low_priority_weight": 0, 00:18:19.101 "medium_priority_weight": 0, 00:18:19.101 "high_priority_weight": 0, 00:18:19.101 "nvme_adminq_poll_period_us": 10000, 00:18:19.101 "nvme_ioq_poll_period_us": 0, 00:18:19.101 "io_queue_requests": 512, 00:18:19.101 "delay_cmd_submit": true, 00:18:19.101 "transport_retry_count": 4, 00:18:19.101 "bdev_retry_count": 3, 00:18:19.101 "transport_ack_timeout": 0, 00:18:19.101 "ctrlr_loss_timeout_sec": 0, 00:18:19.101 "reconnect_delay_sec": 0, 00:18:19.101 "fast_io_fail_timeout_sec": 0, 00:18:19.101 "disable_auto_failback": false, 00:18:19.101 "generate_uuids": false, 00:18:19.101 "transport_tos": 0, 00:18:19.101 "nvme_error_stat": false, 00:18:19.101 "rdma_srq_size": 0, 00:18:19.101 "io_path_stat": false, 00:18:19.101 "allow_accel_sequence": false, 00:18:19.101 "rdma_max_cq_size": 0, 00:18:19.101 "rdma_cm_event_timeout_ms": 0, 00:18:19.101 "dhchap_digests": [ 00:18:19.101 "sha256", 00:18:19.101 "sha384", 00:18:19.101 "sha512" 00:18:19.101 ], 00:18:19.101 "dhchap_dhgroups": [ 00:18:19.101 "null", 00:18:19.101 "ffdhe2048", 00:18:19.101 "ffdhe3072", 00:18:19.101 "ffdhe4096", 00:18:19.101 "ffdhe6144", 00:18:19.101 "ffdhe8192" 00:18:19.101 ] 00:18:19.101 } 00:18:19.101 }, 00:18:19.101 { 00:18:19.101 "method": "bdev_nvme_attach_controller", 00:18:19.101 "params": { 00:18:19.102 "name": "TLSTEST", 00:18:19.102 "trtype": "TCP", 00:18:19.102 "adrfam": "IPv4", 00:18:19.102 "traddr": "10.0.0.2", 00:18:19.102 "trsvcid": "4420", 00:18:19.102 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:19.102 "prchk_reftag": false, 00:18:19.102 "prchk_guard": false, 00:18:19.102 "ctrlr_loss_timeout_sec": 0, 00:18:19.102 "reconnect_delay_sec": 0, 00:18:19.102 "fast_io_fail_timeout_sec": 0, 00:18:19.102 "psk": "/tmp/tmp.81Rz786qYM", 00:18:19.102 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:19.102 "hdgst": false, 00:18:19.102 "ddgst": false 00:18:19.102 } 00:18:19.102 }, 00:18:19.102 { 00:18:19.102 "method": "bdev_nvme_set_hotplug", 00:18:19.102 "params": { 00:18:19.102 "period_us": 100000, 00:18:19.102 "enable": false 00:18:19.102 } 00:18:19.102 }, 00:18:19.102 { 00:18:19.102 "method": "bdev_wait_for_examine" 00:18:19.102 } 00:18:19.102 ] 00:18:19.102 }, 00:18:19.102 { 00:18:19.102 "subsystem": "nbd", 00:18:19.102 "config": [] 00:18:19.102 } 00:18:19.102 ] 00:18:19.102 }' 00:18:19.102 23:21:34 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 2363579 00:18:19.102 23:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2363579 ']' 00:18:19.102 23:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2363579 00:18:19.102 23:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:19.102 23:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:19.102 23:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2363579 00:18:19.102 23:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:19.102 23:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:19.102 23:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2363579' 00:18:19.102 killing process with pid 2363579 00:18:19.102 23:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2363579 00:18:19.102 Received shutdown signal, test time was about 10.000000 seconds 00:18:19.102 00:18:19.102 Latency(us) 00:18:19.102 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:18:19.102 =================================================================================================================== 00:18:19.102 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:19.102 [2024-07-15 23:21:34.234158] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:19.102 23:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2363579 00:18:19.380 23:21:34 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 2363300 00:18:19.380 23:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2363300 ']' 00:18:19.380 23:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2363300 00:18:19.380 23:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:19.380 23:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:19.380 23:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2363300 00:18:19.380 23:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:19.380 23:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:19.380 23:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2363300' 00:18:19.380 killing process with pid 2363300 00:18:19.380 23:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2363300 00:18:19.380 [2024-07-15 23:21:34.520119] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:19.380 23:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2363300 00:18:19.639 23:21:34 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:19.639 23:21:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:19.639 23:21:34 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:18:19.639 "subsystems": [ 00:18:19.639 { 00:18:19.639 "subsystem": "keyring", 00:18:19.639 "config": [] 00:18:19.639 }, 00:18:19.639 { 00:18:19.639 "subsystem": "iobuf", 00:18:19.639 "config": [ 00:18:19.639 { 00:18:19.639 "method": "iobuf_set_options", 00:18:19.639 "params": { 00:18:19.639 "small_pool_count": 8192, 00:18:19.639 "large_pool_count": 1024, 00:18:19.639 "small_bufsize": 8192, 00:18:19.639 "large_bufsize": 135168 00:18:19.639 } 00:18:19.639 } 00:18:19.639 ] 00:18:19.639 }, 00:18:19.639 { 00:18:19.639 "subsystem": "sock", 00:18:19.639 "config": [ 00:18:19.639 { 00:18:19.639 "method": "sock_set_default_impl", 00:18:19.639 "params": { 00:18:19.639 "impl_name": "posix" 00:18:19.639 } 00:18:19.639 }, 00:18:19.639 { 00:18:19.639 "method": "sock_impl_set_options", 00:18:19.639 "params": { 00:18:19.639 "impl_name": "ssl", 00:18:19.639 "recv_buf_size": 4096, 00:18:19.639 "send_buf_size": 4096, 00:18:19.639 "enable_recv_pipe": true, 00:18:19.639 "enable_quickack": false, 00:18:19.639 "enable_placement_id": 0, 00:18:19.639 "enable_zerocopy_send_server": true, 00:18:19.639 "enable_zerocopy_send_client": false, 00:18:19.639 "zerocopy_threshold": 0, 00:18:19.639 "tls_version": 0, 00:18:19.639 "enable_ktls": false 00:18:19.639 } 00:18:19.639 }, 00:18:19.639 { 00:18:19.639 "method": "sock_impl_set_options", 00:18:19.639 "params": { 00:18:19.639 "impl_name": "posix", 00:18:19.639 "recv_buf_size": 2097152, 00:18:19.639 "send_buf_size": 2097152, 00:18:19.639 "enable_recv_pipe": true, 
00:18:19.639 "enable_quickack": false, 00:18:19.639 "enable_placement_id": 0, 00:18:19.639 "enable_zerocopy_send_server": true, 00:18:19.639 "enable_zerocopy_send_client": false, 00:18:19.639 "zerocopy_threshold": 0, 00:18:19.639 "tls_version": 0, 00:18:19.639 "enable_ktls": false 00:18:19.639 } 00:18:19.639 } 00:18:19.639 ] 00:18:19.639 }, 00:18:19.639 { 00:18:19.639 "subsystem": "vmd", 00:18:19.639 "config": [] 00:18:19.639 }, 00:18:19.639 { 00:18:19.639 "subsystem": "accel", 00:18:19.639 "config": [ 00:18:19.639 { 00:18:19.639 "method": "accel_set_options", 00:18:19.639 "params": { 00:18:19.639 "small_cache_size": 128, 00:18:19.639 "large_cache_size": 16, 00:18:19.639 "task_count": 2048, 00:18:19.639 "sequence_count": 2048, 00:18:19.639 "buf_count": 2048 00:18:19.639 } 00:18:19.639 } 00:18:19.639 ] 00:18:19.639 }, 00:18:19.639 { 00:18:19.639 "subsystem": "bdev", 00:18:19.639 "config": [ 00:18:19.639 { 00:18:19.639 "method": "bdev_set_options", 00:18:19.639 "params": { 00:18:19.639 "bdev_io_pool_size": 65535, 00:18:19.639 "bdev_io_cache_size": 256, 00:18:19.639 "bdev_auto_examine": true, 00:18:19.639 "iobuf_small_cache_size": 128, 00:18:19.639 "iobuf_large_cache_size": 16 00:18:19.639 } 00:18:19.639 }, 00:18:19.639 { 00:18:19.639 "method": "bdev_raid_set_options", 00:18:19.639 "params": { 00:18:19.639 "process_window_size_kb": 1024 00:18:19.639 } 00:18:19.639 }, 00:18:19.639 { 00:18:19.639 "method": "bdev_iscsi_set_options", 00:18:19.639 "params": { 00:18:19.639 "timeout_sec": 30 00:18:19.639 } 00:18:19.639 }, 00:18:19.639 { 00:18:19.639 "method": "bdev_nvme_set_options", 00:18:19.639 "params": { 00:18:19.639 "action_on_timeout": "none", 00:18:19.639 "timeout_us": 0, 00:18:19.639 "timeout_admin_us": 0, 00:18:19.639 "keep_alive_timeout_ms": 10000, 00:18:19.639 "arbitration_burst": 0, 00:18:19.639 "low_priority_weight": 0, 00:18:19.639 "medium_priority_weight": 0, 00:18:19.639 "high_priority_weight": 0, 00:18:19.639 "nvme_adminq_poll_period_us": 10000, 00:18:19.639 "nvme_ioq_poll_period_us": 0, 00:18:19.639 "io_queue_requests": 0, 00:18:19.639 "delay_cmd_submit": true, 00:18:19.639 "transport_retry_count": 4, 00:18:19.639 "bdev_retry_count": 3, 00:18:19.639 "transport_ack_timeout": 0, 00:18:19.639 "ctrlr_loss_timeout_sec": 0, 00:18:19.639 "reconnect_delay_sec": 0, 00:18:19.639 "fast_io_fail_timeout_sec": 0, 00:18:19.639 "disable_auto_failback": false, 00:18:19.639 "generate_uuids": false, 00:18:19.639 "transport_tos": 0, 00:18:19.639 "nvme_error_stat": false, 00:18:19.639 "rdma_srq_size": 0, 00:18:19.639 "io_path_stat": false, 00:18:19.639 "allow_accel_sequence": false, 00:18:19.639 "rdma_max_cq_size": 0, 00:18:19.639 "rdma_cm_event_timeout_ms": 0, 00:18:19.639 "dhchap_digests": [ 00:18:19.639 "sha256", 00:18:19.639 "sha384", 00:18:19.639 "sha512" 00:18:19.639 ], 00:18:19.639 "dhchap_dhgroups": [ 00:18:19.639 "null", 00:18:19.639 "ffdhe2048", 00:18:19.639 "ffdhe3072", 00:18:19.639 "ffdhe4096", 00:18:19.639 "ffdhe 23:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:19.639 6144", 00:18:19.639 "ffdhe8192" 00:18:19.639 ] 00:18:19.639 } 00:18:19.639 }, 00:18:19.639 { 00:18:19.639 "method": "bdev_nvme_set_hotplug", 00:18:19.639 "params": { 00:18:19.639 "period_us": 100000, 00:18:19.639 "enable": false 00:18:19.639 } 00:18:19.639 }, 00:18:19.639 { 00:18:19.639 "method": "bdev_malloc_create", 00:18:19.639 "params": { 00:18:19.639 "name": "malloc0", 00:18:19.639 "num_blocks": 8192, 00:18:19.639 "block_size": 4096, 00:18:19.639 "physical_block_size": 4096, 
00:18:19.639 "uuid": "3c7e8395-6a69-46ca-82d8-935e74a9084b", 00:18:19.639 "optimal_io_boundary": 0 00:18:19.639 } 00:18:19.639 }, 00:18:19.639 { 00:18:19.639 "method": "bdev_wait_for_examine" 00:18:19.639 } 00:18:19.639 ] 00:18:19.639 }, 00:18:19.639 { 00:18:19.639 "subsystem": "nbd", 00:18:19.639 "config": [] 00:18:19.639 }, 00:18:19.639 { 00:18:19.639 "subsystem": "scheduler", 00:18:19.639 "config": [ 00:18:19.639 { 00:18:19.639 "method": "framework_set_scheduler", 00:18:19.639 "params": { 00:18:19.639 "name": "static" 00:18:19.639 } 00:18:19.639 } 00:18:19.639 ] 00:18:19.639 }, 00:18:19.639 { 00:18:19.639 "subsystem": "nvmf", 00:18:19.639 "config": [ 00:18:19.639 { 00:18:19.639 "method": "nvmf_set_config", 00:18:19.639 "params": { 00:18:19.639 "discovery_filter": "match_any", 00:18:19.639 "admin_cmd_passthru": { 00:18:19.639 "identify_ctrlr": false 00:18:19.639 } 00:18:19.639 } 00:18:19.639 }, 00:18:19.639 { 00:18:19.639 "method": "nvmf_set_max_subsystems", 00:18:19.639 "params": { 00:18:19.639 "max_subsystems": 1024 00:18:19.639 } 00:18:19.639 }, 00:18:19.639 { 00:18:19.639 "method": "nvmf_set_crdt", 00:18:19.639 "params": { 00:18:19.639 "crdt1": 0, 00:18:19.639 "crdt2": 0, 00:18:19.639 "crdt3": 0 00:18:19.639 } 00:18:19.639 }, 00:18:19.639 { 00:18:19.639 "method": "nvmf_create_transport", 00:18:19.640 "params": { 00:18:19.640 "trtype": "TCP", 00:18:19.640 "max_queue_depth": 128, 00:18:19.640 "max_io_qpairs_per_ctrlr": 127, 00:18:19.640 "in_capsule_data_size": 4096, 00:18:19.640 "max_io_size": 131072, 00:18:19.640 "io_unit_size": 131072, 00:18:19.640 "max_aq_depth": 128, 00:18:19.640 "num_shared_buffers": 511, 00:18:19.640 "buf_cache_size": 4294967295, 00:18:19.640 "dif_insert_or_strip": false, 00:18:19.640 "zcopy": false, 00:18:19.640 "c2h_success": false, 00:18:19.640 "sock_priority": 0, 00:18:19.640 "abort_timeout_sec": 1, 00:18:19.640 "ack_timeout": 0, 00:18:19.640 "data_wr_pool_size": 0 00:18:19.640 } 00:18:19.640 }, 00:18:19.640 { 00:18:19.640 "method": "nvmf_create_subsystem", 00:18:19.640 "params": { 00:18:19.640 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:19.640 "allow_any_host": false, 00:18:19.640 "serial_number": "SPDK00000000000001", 00:18:19.640 "model_number": "SPDK bdev Controller", 00:18:19.640 "max_namespaces": 10, 00:18:19.640 "min_cntlid": 1, 00:18:19.640 "max_cntlid": 65519, 00:18:19.640 "ana_reporting": false 00:18:19.640 } 00:18:19.640 }, 00:18:19.640 { 00:18:19.640 "method": "nvmf_subsystem_add_host", 00:18:19.640 "params": { 00:18:19.640 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:19.640 "host": "nqn.2016-06.io.spdk:host1", 00:18:19.640 "psk": "/tmp/tmp.81Rz786qYM" 00:18:19.640 } 00:18:19.640 }, 00:18:19.640 { 00:18:19.640 "method": "nvmf_subsystem_add_ns", 00:18:19.640 "params": { 00:18:19.640 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:19.640 "namespace": { 00:18:19.640 "nsid": 1, 00:18:19.640 "bdev_name": "malloc0", 00:18:19.640 "nguid": "3C7E83956A6946CA82D8935E74A9084B", 00:18:19.640 "uuid": "3c7e8395-6a69-46ca-82d8-935e74a9084b", 00:18:19.640 "no_auto_visible": false 00:18:19.640 } 00:18:19.640 } 00:18:19.640 }, 00:18:19.640 { 00:18:19.640 "method": "nvmf_subsystem_add_listener", 00:18:19.640 "params": { 00:18:19.640 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:19.640 "listen_address": { 00:18:19.640 "trtype": "TCP", 00:18:19.640 "adrfam": "IPv4", 00:18:19.640 "traddr": "10.0.0.2", 00:18:19.640 "trsvcid": "4420" 00:18:19.640 }, 00:18:19.640 "secure_channel": true 00:18:19.640 } 00:18:19.640 } 00:18:19.640 ] 00:18:19.640 } 00:18:19.640 ] 00:18:19.640 }' 
00:18:19.640 23:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.640 23:21:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2363777 00:18:19.640 23:21:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:19.640 23:21:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2363777 00:18:19.640 23:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2363777 ']' 00:18:19.640 23:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.640 23:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:19.640 23:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.640 23:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:19.640 23:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.640 [2024-07-15 23:21:34.873450] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:18:19.640 [2024-07-15 23:21:34.873548] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:19.640 EAL: No free 2048 kB hugepages reported on node 1 00:18:19.640 [2024-07-15 23:21:34.943240] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.898 [2024-07-15 23:21:35.058111] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:19.898 [2024-07-15 23:21:35.058189] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:19.898 [2024-07-15 23:21:35.058206] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:19.898 [2024-07-15 23:21:35.058220] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:19.898 [2024-07-15 23:21:35.058231] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:19.898 [2024-07-15 23:21:35.058327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:20.156 [2024-07-15 23:21:35.300744] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:20.156 [2024-07-15 23:21:35.316684] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:20.156 [2024-07-15 23:21:35.332762] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:20.156 [2024-07-15 23:21:35.354951] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:20.722 23:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:20.722 23:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:20.722 23:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:20.722 23:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:20.722 23:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:20.722 23:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:20.722 23:21:35 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=2363894 00:18:20.722 23:21:35 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 2363894 /var/tmp/bdevperf.sock 00:18:20.722 23:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2363894 ']' 00:18:20.723 23:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:20.723 23:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:20.723 23:21:35 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:20.723 23:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:20.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:20.723 23:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:20.723 23:21:35 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:18:20.723 "subsystems": [ 00:18:20.723 { 00:18:20.723 "subsystem": "keyring", 00:18:20.723 "config": [] 00:18:20.723 }, 00:18:20.723 { 00:18:20.723 "subsystem": "iobuf", 00:18:20.723 "config": [ 00:18:20.723 { 00:18:20.723 "method": "iobuf_set_options", 00:18:20.723 "params": { 00:18:20.723 "small_pool_count": 8192, 00:18:20.723 "large_pool_count": 1024, 00:18:20.723 "small_bufsize": 8192, 00:18:20.723 "large_bufsize": 135168 00:18:20.723 } 00:18:20.723 } 00:18:20.723 ] 00:18:20.723 }, 00:18:20.723 { 00:18:20.723 "subsystem": "sock", 00:18:20.723 "config": [ 00:18:20.723 { 00:18:20.723 "method": "sock_set_default_impl", 00:18:20.723 "params": { 00:18:20.723 "impl_name": "posix" 00:18:20.723 } 00:18:20.723 }, 00:18:20.723 { 00:18:20.723 "method": "sock_impl_set_options", 00:18:20.723 "params": { 00:18:20.723 "impl_name": "ssl", 00:18:20.723 "recv_buf_size": 4096, 00:18:20.723 "send_buf_size": 4096, 00:18:20.723 "enable_recv_pipe": true, 00:18:20.723 "enable_quickack": false, 00:18:20.723 "enable_placement_id": 0, 00:18:20.723 "enable_zerocopy_send_server": true, 00:18:20.723 "enable_zerocopy_send_client": false, 00:18:20.723 "zerocopy_threshold": 0, 00:18:20.723 "tls_version": 0, 00:18:20.723 "enable_ktls": false 00:18:20.723 } 00:18:20.723 }, 00:18:20.723 { 00:18:20.723 "method": "sock_impl_set_options", 00:18:20.723 "params": { 00:18:20.723 "impl_name": "posix", 00:18:20.723 "recv_buf_size": 2097152, 00:18:20.723 "send_buf_size": 2097152, 00:18:20.723 "enable_recv_pipe": true, 00:18:20.723 "enable_quickack": false, 00:18:20.723 "enable_placement_id": 0, 00:18:20.723 "enable_zerocopy_send_server": true, 00:18:20.723 "enable_zerocopy_send_client": false, 00:18:20.723 "zerocopy_threshold": 0, 00:18:20.723 "tls_version": 0, 00:18:20.723 "enable_ktls": false 00:18:20.723 } 00:18:20.723 } 00:18:20.723 ] 00:18:20.723 }, 00:18:20.723 { 00:18:20.723 "subsystem": "vmd", 00:18:20.723 "config": [] 00:18:20.723 }, 00:18:20.723 { 00:18:20.723 "subsystem": "accel", 00:18:20.723 "config": [ 00:18:20.723 { 00:18:20.723 "method": "accel_set_options", 00:18:20.723 "params": { 00:18:20.723 "small_cache_size": 128, 00:18:20.723 "large_cache_size": 16, 00:18:20.723 "task_count": 2048, 00:18:20.723 "sequence_count": 2048, 00:18:20.723 "buf_count": 2048 00:18:20.723 } 00:18:20.723 } 00:18:20.723 ] 00:18:20.723 }, 00:18:20.723 { 00:18:20.723 "subsystem": "bdev", 00:18:20.723 "config": [ 00:18:20.723 { 00:18:20.723 "method": "bdev_set_options", 00:18:20.723 "params": { 00:18:20.723 "bdev_io_pool_size": 65535, 00:18:20.723 "bdev_io_cache_size": 256, 00:18:20.723 "bdev_auto_examine": true, 00:18:20.723 "iobuf_small_cache_size": 128, 00:18:20.723 "iobuf_large_cache_size": 16 00:18:20.723 } 00:18:20.723 }, 00:18:20.723 { 00:18:20.723 "method": "bdev_raid_set_options", 00:18:20.723 "params": { 00:18:20.723 "process_window_size_kb": 1024 00:18:20.723 } 00:18:20.723 }, 00:18:20.723 { 00:18:20.723 "method": "bdev_iscsi_set_options", 00:18:20.723 "params": { 00:18:20.723 "timeout_sec": 30 00:18:20.723 } 00:18:20.723 }, 00:18:20.723 { 00:18:20.723 "method": "bdev_nvme_set_options", 00:18:20.723 "params": { 00:18:20.723 "action_on_timeout": "none", 00:18:20.723 "timeout_us": 0, 00:18:20.723 "timeout_admin_us": 0, 00:18:20.723 "keep_alive_timeout_ms": 10000, 00:18:20.723 "arbitration_burst": 0, 00:18:20.723 "low_priority_weight": 0, 00:18:20.723 
"medium_priority_weight": 0, 00:18:20.723 "high_priority_weight": 0, 00:18:20.723 "nvme_adminq_poll_period_us": 10000, 00:18:20.723 "nvme_ioq_poll_period_us": 0, 00:18:20.723 "io_queue_requests": 512, 00:18:20.723 "delay_cmd_submit": true, 00:18:20.723 "transport_retry_count": 4, 00:18:20.723 "bdev_retry_count": 3, 00:18:20.723 "transport_ack_timeout": 0, 00:18:20.723 "ctrlr_loss_timeout_sec": 0, 00:18:20.723 "reconnect_delay_sec": 0, 00:18:20.723 "fast_io_fail_timeout_sec": 0, 00:18:20.723 "disable_auto_failback": false, 00:18:20.723 "generate_uuids": false, 00:18:20.723 "transport_tos": 0, 00:18:20.723 "nvme_error_stat": false, 00:18:20.723 "rdma_srq_size": 0, 00:18:20.723 "io_path_stat": false, 00:18:20.723 "allow_accel_sequence": false, 00:18:20.723 "rdma_max_cq_size": 0, 00:18:20.723 "rdma_cm_event_timeout_ms": 0, 00:18:20.723 "dhchap_digests": [ 00:18:20.723 "sha256", 00:18:20.723 "sha384", 00:18:20.723 "sha512" 00:18:20.723 ], 00:18:20.723 "dhchap_dhgroups": [ 00:18:20.723 "null", 00:18:20.723 "ffdhe2048", 00:18:20.723 "ffdhe3072", 00:18:20.723 "ffdhe4096", 00:18:20.723 "ffdhe6144", 00:18:20.723 "ffdhe8192" 00:18:20.723 ] 00:18:20.723 } 00:18:20.723 }, 00:18:20.723 { 00:18:20.723 "method": "bdev_nvme_attach_controller", 00:18:20.723 "params": { 00:18:20.723 "name": "TLSTEST", 00:18:20.723 "trtype": "TCP", 00:18:20.723 "adrfam": "IPv4", 00:18:20.723 "traddr": "10.0.0.2", 00:18:20.723 "trsvcid": "4420", 00:18:20.723 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:20.723 "prchk_reftag": false, 00:18:20.723 "prchk_guard": false, 00:18:20.723 "ctrlr_loss_timeout_sec": 0, 00:18:20.723 "reconnect_delay_sec": 0, 00:18:20.723 "fast_io_fail_timeout_sec": 0, 00:18:20.723 "psk": "/tmp/tmp.81Rz786qYM", 00:18:20.723 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:20.724 "hdgst": false, 00:18:20.724 "ddgst": false 00:18:20.724 } 00:18:20.724 }, 00:18:20.724 { 00:18:20.724 "method": "bdev_nvme_set_hotplug", 00:18:20.724 "params": { 00:18:20.724 "period_us": 100000, 00:18:20.724 "enable": false 00:18:20.724 } 00:18:20.724 }, 00:18:20.724 { 00:18:20.724 "method": "bdev_wait_for_examine" 00:18:20.724 } 00:18:20.724 ] 00:18:20.724 }, 00:18:20.724 { 00:18:20.724 "subsystem": "nbd", 00:18:20.724 "config": [] 00:18:20.724 } 00:18:20.724 ] 00:18:20.724 }' 00:18:20.724 23:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:20.724 [2024-07-15 23:21:35.859498] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
00:18:20.724 [2024-07-15 23:21:35.859577] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2363894 ] 00:18:20.724 EAL: No free 2048 kB hugepages reported on node 1 00:18:20.724 [2024-07-15 23:21:35.919075] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.724 [2024-07-15 23:21:36.032287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:20.982 [2024-07-15 23:21:36.203275] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:20.982 [2024-07-15 23:21:36.203431] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:21.547 23:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:21.547 23:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:21.547 23:21:36 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:21.805 Running I/O for 10 seconds... 00:18:31.772 00:18:31.772 Latency(us) 00:18:31.772 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.772 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:31.772 Verification LBA range: start 0x0 length 0x2000 00:18:31.772 TLSTESTn1 : 10.02 3643.45 14.23 0.00 0.00 35070.30 9175.04 42331.40 00:18:31.772 =================================================================================================================== 00:18:31.772 Total : 3643.45 14.23 0.00 0.00 35070.30 9175.04 42331.40 00:18:31.772 0 00:18:31.772 23:21:47 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:31.772 23:21:47 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 2363894 00:18:31.772 23:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2363894 ']' 00:18:31.772 23:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2363894 00:18:31.772 23:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:31.772 23:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:31.772 23:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2363894 00:18:31.772 23:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:31.772 23:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:31.772 23:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2363894' 00:18:31.772 killing process with pid 2363894 00:18:31.772 23:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2363894 00:18:31.772 Received shutdown signal, test time was about 10.000000 seconds 00:18:31.772 00:18:31.772 Latency(us) 00:18:31.772 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.772 =================================================================================================================== 00:18:31.772 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:31.772 [2024-07-15 23:21:47.047036] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:31.772 23:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2363894 00:18:32.029 23:21:47 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 2363777 00:18:32.029 23:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2363777 ']' 00:18:32.029 23:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2363777 00:18:32.029 23:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:32.029 23:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:32.029 23:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2363777 00:18:32.029 23:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:32.029 23:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:32.029 23:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2363777' 00:18:32.029 killing process with pid 2363777 00:18:32.029 23:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2363777 00:18:32.029 [2024-07-15 23:21:47.340048] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:32.029 23:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2363777 00:18:32.593 23:21:47 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:18:32.593 23:21:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:32.593 23:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:32.593 23:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.593 23:21:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2365341 00:18:32.593 23:21:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:32.593 23:21:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2365341 00:18:32.593 23:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2365341 ']' 00:18:32.593 23:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.593 23:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:32.593 23:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.594 23:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:32.594 23:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.594 [2024-07-15 23:21:47.673043] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
00:18:32.594 [2024-07-15 23:21:47.673135] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:32.594 EAL: No free 2048 kB hugepages reported on node 1 00:18:32.594 [2024-07-15 23:21:47.735827] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.594 [2024-07-15 23:21:47.841745] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:32.594 [2024-07-15 23:21:47.841814] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:32.594 [2024-07-15 23:21:47.841843] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:32.594 [2024-07-15 23:21:47.841855] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:32.594 [2024-07-15 23:21:47.841865] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:32.594 [2024-07-15 23:21:47.841900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.851 23:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:32.851 23:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:32.851 23:21:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:32.851 23:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:32.851 23:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.851 23:21:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:32.851 23:21:47 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.81Rz786qYM 00:18:32.851 23:21:47 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.81Rz786qYM 00:18:32.851 23:21:47 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:33.107 [2024-07-15 23:21:48.218263] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:33.107 23:21:48 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:33.364 23:21:48 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:33.620 [2024-07-15 23:21:48.723621] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:33.620 [2024-07-15 23:21:48.723853] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:33.621 23:21:48 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:33.877 malloc0 00:18:33.877 23:21:48 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:34.134 23:21:49 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.81Rz786qYM 00:18:34.392 [2024-07-15 23:21:49.513388] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:34.392 23:21:49 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=2365594 00:18:34.392 23:21:49 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:34.392 23:21:49 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:34.392 23:21:49 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 2365594 /var/tmp/bdevperf.sock 00:18:34.392 23:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2365594 ']' 00:18:34.392 23:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:34.392 23:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:34.392 23:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:34.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:34.392 23:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:34.392 23:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.392 [2024-07-15 23:21:49.572542] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:18:34.392 [2024-07-15 23:21:49.572630] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2365594 ] 00:18:34.392 EAL: No free 2048 kB hugepages reported on node 1 00:18:34.392 [2024-07-15 23:21:49.632605] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.650 [2024-07-15 23:21:49.748125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:34.650 23:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:34.650 23:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:34.650 23:21:49 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.81Rz786qYM 00:18:34.908 23:21:50 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:35.166 [2024-07-15 23:21:50.356531] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:35.166 nvme0n1 00:18:35.166 23:21:50 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:35.423 Running I/O for 1 seconds... 
00:18:36.385 00:18:36.385 Latency(us) 00:18:36.385 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.385 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:36.385 Verification LBA range: start 0x0 length 0x2000 00:18:36.385 nvme0n1 : 1.03 2643.83 10.33 0.00 0.00 47765.28 8786.68 47185.92 00:18:36.385 =================================================================================================================== 00:18:36.385 Total : 2643.83 10.33 0.00 0.00 47765.28 8786.68 47185.92 00:18:36.385 0 00:18:36.385 23:21:51 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 2365594 00:18:36.385 23:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2365594 ']' 00:18:36.385 23:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2365594 00:18:36.385 23:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:36.385 23:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:36.385 23:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2365594 00:18:36.385 23:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:36.385 23:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:36.385 23:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2365594' 00:18:36.385 killing process with pid 2365594 00:18:36.385 23:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2365594 00:18:36.385 Received shutdown signal, test time was about 1.000000 seconds 00:18:36.385 00:18:36.385 Latency(us) 00:18:36.385 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.385 =================================================================================================================== 00:18:36.385 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:36.385 23:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2365594 00:18:36.655 23:21:51 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 2365341 00:18:36.655 23:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2365341 ']' 00:18:36.655 23:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2365341 00:18:36.655 23:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:36.655 23:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:36.655 23:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2365341 00:18:36.655 23:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:36.655 23:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:36.655 23:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2365341' 00:18:36.655 killing process with pid 2365341 00:18:36.655 23:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2365341 00:18:36.655 [2024-07-15 23:21:51.931272] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:36.655 23:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2365341 00:18:36.921 23:21:52 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:18:36.921 23:21:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:36.921 
23:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:36.921 23:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:36.921 23:21:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2365902 00:18:36.921 23:21:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:36.921 23:21:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2365902 00:18:36.921 23:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2365902 ']' 00:18:37.178 23:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.178 23:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:37.178 23:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.178 23:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:37.178 23:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.178 [2024-07-15 23:21:52.282731] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:18:37.178 [2024-07-15 23:21:52.282838] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:37.178 EAL: No free 2048 kB hugepages reported on node 1 00:18:37.178 [2024-07-15 23:21:52.352888] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.178 [2024-07-15 23:21:52.468395] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:37.178 [2024-07-15 23:21:52.468461] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:37.178 [2024-07-15 23:21:52.468478] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:37.178 [2024-07-15 23:21:52.468492] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:37.178 [2024-07-15 23:21:52.468504] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:37.178 [2024-07-15 23:21:52.468534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.109 23:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:38.109 23:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:38.109 23:21:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:38.109 23:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:38.109 23:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:38.109 23:21:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:38.109 23:21:53 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:18:38.109 23:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.109 23:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:38.109 [2024-07-15 23:21:53.244431] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:38.109 malloc0 00:18:38.109 [2024-07-15 23:21:53.277154] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:38.109 [2024-07-15 23:21:53.277415] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:38.109 23:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.109 23:21:53 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=2366054 00:18:38.109 23:21:53 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:38.109 23:21:53 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 2366054 /var/tmp/bdevperf.sock 00:18:38.109 23:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2366054 ']' 00:18:38.109 23:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:38.109 23:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:38.109 23:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:38.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:38.109 23:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:38.109 23:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:38.109 [2024-07-15 23:21:53.347847] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
00:18:38.109 [2024-07-15 23:21:53.347922] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2366054 ] 00:18:38.109 EAL: No free 2048 kB hugepages reported on node 1 00:18:38.109 [2024-07-15 23:21:53.408658] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.366 [2024-07-15 23:21:53.526973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:38.366 23:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:38.366 23:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:38.366 23:21:53 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.81Rz786qYM 00:18:38.623 23:21:53 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:38.880 [2024-07-15 23:21:54.136316] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:39.138 nvme0n1 00:18:39.138 23:21:54 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:39.138 Running I/O for 1 seconds... 00:18:40.070 00:18:40.070 Latency(us) 00:18:40.070 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.070 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:40.070 Verification LBA range: start 0x0 length 0x2000 00:18:40.070 nvme0n1 : 1.02 2927.98 11.44 0.00 0.00 43227.94 5898.24 79614.10 00:18:40.070 =================================================================================================================== 00:18:40.070 Total : 2927.98 11.44 0.00 0.00 43227.94 5898.24 79614.10 00:18:40.070 0 00:18:40.070 23:21:55 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:18:40.070 23:21:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.070 23:21:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:40.327 23:21:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.327 23:21:55 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:18:40.327 "subsystems": [ 00:18:40.327 { 00:18:40.327 "subsystem": "keyring", 00:18:40.327 "config": [ 00:18:40.327 { 00:18:40.327 "method": "keyring_file_add_key", 00:18:40.327 "params": { 00:18:40.327 "name": "key0", 00:18:40.327 "path": "/tmp/tmp.81Rz786qYM" 00:18:40.327 } 00:18:40.327 } 00:18:40.327 ] 00:18:40.327 }, 00:18:40.327 { 00:18:40.327 "subsystem": "iobuf", 00:18:40.327 "config": [ 00:18:40.327 { 00:18:40.327 "method": "iobuf_set_options", 00:18:40.327 "params": { 00:18:40.327 "small_pool_count": 8192, 00:18:40.327 "large_pool_count": 1024, 00:18:40.327 "small_bufsize": 8192, 00:18:40.327 "large_bufsize": 135168 00:18:40.327 } 00:18:40.327 } 00:18:40.327 ] 00:18:40.327 }, 00:18:40.327 { 00:18:40.327 "subsystem": "sock", 00:18:40.327 "config": [ 00:18:40.327 { 00:18:40.327 "method": "sock_set_default_impl", 00:18:40.327 "params": { 00:18:40.327 "impl_name": "posix" 00:18:40.327 } 
00:18:40.327 }, 00:18:40.327 { 00:18:40.327 "method": "sock_impl_set_options", 00:18:40.327 "params": { 00:18:40.327 "impl_name": "ssl", 00:18:40.327 "recv_buf_size": 4096, 00:18:40.327 "send_buf_size": 4096, 00:18:40.327 "enable_recv_pipe": true, 00:18:40.327 "enable_quickack": false, 00:18:40.327 "enable_placement_id": 0, 00:18:40.327 "enable_zerocopy_send_server": true, 00:18:40.327 "enable_zerocopy_send_client": false, 00:18:40.327 "zerocopy_threshold": 0, 00:18:40.327 "tls_version": 0, 00:18:40.327 "enable_ktls": false 00:18:40.327 } 00:18:40.327 }, 00:18:40.327 { 00:18:40.327 "method": "sock_impl_set_options", 00:18:40.327 "params": { 00:18:40.327 "impl_name": "posix", 00:18:40.327 "recv_buf_size": 2097152, 00:18:40.327 "send_buf_size": 2097152, 00:18:40.327 "enable_recv_pipe": true, 00:18:40.327 "enable_quickack": false, 00:18:40.327 "enable_placement_id": 0, 00:18:40.327 "enable_zerocopy_send_server": true, 00:18:40.327 "enable_zerocopy_send_client": false, 00:18:40.327 "zerocopy_threshold": 0, 00:18:40.327 "tls_version": 0, 00:18:40.327 "enable_ktls": false 00:18:40.327 } 00:18:40.327 } 00:18:40.327 ] 00:18:40.327 }, 00:18:40.327 { 00:18:40.327 "subsystem": "vmd", 00:18:40.327 "config": [] 00:18:40.327 }, 00:18:40.327 { 00:18:40.327 "subsystem": "accel", 00:18:40.327 "config": [ 00:18:40.327 { 00:18:40.327 "method": "accel_set_options", 00:18:40.327 "params": { 00:18:40.327 "small_cache_size": 128, 00:18:40.327 "large_cache_size": 16, 00:18:40.327 "task_count": 2048, 00:18:40.327 "sequence_count": 2048, 00:18:40.327 "buf_count": 2048 00:18:40.327 } 00:18:40.327 } 00:18:40.327 ] 00:18:40.327 }, 00:18:40.327 { 00:18:40.327 "subsystem": "bdev", 00:18:40.327 "config": [ 00:18:40.327 { 00:18:40.327 "method": "bdev_set_options", 00:18:40.327 "params": { 00:18:40.327 "bdev_io_pool_size": 65535, 00:18:40.327 "bdev_io_cache_size": 256, 00:18:40.327 "bdev_auto_examine": true, 00:18:40.327 "iobuf_small_cache_size": 128, 00:18:40.327 "iobuf_large_cache_size": 16 00:18:40.327 } 00:18:40.327 }, 00:18:40.327 { 00:18:40.327 "method": "bdev_raid_set_options", 00:18:40.327 "params": { 00:18:40.327 "process_window_size_kb": 1024 00:18:40.327 } 00:18:40.327 }, 00:18:40.327 { 00:18:40.327 "method": "bdev_iscsi_set_options", 00:18:40.327 "params": { 00:18:40.327 "timeout_sec": 30 00:18:40.327 } 00:18:40.327 }, 00:18:40.327 { 00:18:40.327 "method": "bdev_nvme_set_options", 00:18:40.328 "params": { 00:18:40.328 "action_on_timeout": "none", 00:18:40.328 "timeout_us": 0, 00:18:40.328 "timeout_admin_us": 0, 00:18:40.328 "keep_alive_timeout_ms": 10000, 00:18:40.328 "arbitration_burst": 0, 00:18:40.328 "low_priority_weight": 0, 00:18:40.328 "medium_priority_weight": 0, 00:18:40.328 "high_priority_weight": 0, 00:18:40.328 "nvme_adminq_poll_period_us": 10000, 00:18:40.328 "nvme_ioq_poll_period_us": 0, 00:18:40.328 "io_queue_requests": 0, 00:18:40.328 "delay_cmd_submit": true, 00:18:40.328 "transport_retry_count": 4, 00:18:40.328 "bdev_retry_count": 3, 00:18:40.328 "transport_ack_timeout": 0, 00:18:40.328 "ctrlr_loss_timeout_sec": 0, 00:18:40.328 "reconnect_delay_sec": 0, 00:18:40.328 "fast_io_fail_timeout_sec": 0, 00:18:40.328 "disable_auto_failback": false, 00:18:40.328 "generate_uuids": false, 00:18:40.328 "transport_tos": 0, 00:18:40.328 "nvme_error_stat": false, 00:18:40.328 "rdma_srq_size": 0, 00:18:40.328 "io_path_stat": false, 00:18:40.328 "allow_accel_sequence": false, 00:18:40.328 "rdma_max_cq_size": 0, 00:18:40.328 "rdma_cm_event_timeout_ms": 0, 00:18:40.328 "dhchap_digests": [ 00:18:40.328 "sha256", 
00:18:40.328 "sha384", 00:18:40.328 "sha512" 00:18:40.328 ], 00:18:40.328 "dhchap_dhgroups": [ 00:18:40.328 "null", 00:18:40.328 "ffdhe2048", 00:18:40.328 "ffdhe3072", 00:18:40.328 "ffdhe4096", 00:18:40.328 "ffdhe6144", 00:18:40.328 "ffdhe8192" 00:18:40.328 ] 00:18:40.328 } 00:18:40.328 }, 00:18:40.328 { 00:18:40.328 "method": "bdev_nvme_set_hotplug", 00:18:40.328 "params": { 00:18:40.328 "period_us": 100000, 00:18:40.328 "enable": false 00:18:40.328 } 00:18:40.328 }, 00:18:40.328 { 00:18:40.328 "method": "bdev_malloc_create", 00:18:40.328 "params": { 00:18:40.328 "name": "malloc0", 00:18:40.328 "num_blocks": 8192, 00:18:40.328 "block_size": 4096, 00:18:40.328 "physical_block_size": 4096, 00:18:40.328 "uuid": "3e64bbac-0d95-46ea-bbf5-e0d5916b0c28", 00:18:40.328 "optimal_io_boundary": 0 00:18:40.328 } 00:18:40.328 }, 00:18:40.328 { 00:18:40.328 "method": "bdev_wait_for_examine" 00:18:40.328 } 00:18:40.328 ] 00:18:40.328 }, 00:18:40.328 { 00:18:40.328 "subsystem": "nbd", 00:18:40.328 "config": [] 00:18:40.328 }, 00:18:40.328 { 00:18:40.328 "subsystem": "scheduler", 00:18:40.328 "config": [ 00:18:40.328 { 00:18:40.328 "method": "framework_set_scheduler", 00:18:40.328 "params": { 00:18:40.328 "name": "static" 00:18:40.328 } 00:18:40.328 } 00:18:40.328 ] 00:18:40.328 }, 00:18:40.328 { 00:18:40.328 "subsystem": "nvmf", 00:18:40.328 "config": [ 00:18:40.328 { 00:18:40.328 "method": "nvmf_set_config", 00:18:40.328 "params": { 00:18:40.328 "discovery_filter": "match_any", 00:18:40.328 "admin_cmd_passthru": { 00:18:40.328 "identify_ctrlr": false 00:18:40.328 } 00:18:40.328 } 00:18:40.328 }, 00:18:40.328 { 00:18:40.328 "method": "nvmf_set_max_subsystems", 00:18:40.328 "params": { 00:18:40.328 "max_subsystems": 1024 00:18:40.328 } 00:18:40.328 }, 00:18:40.328 { 00:18:40.328 "method": "nvmf_set_crdt", 00:18:40.328 "params": { 00:18:40.328 "crdt1": 0, 00:18:40.328 "crdt2": 0, 00:18:40.328 "crdt3": 0 00:18:40.328 } 00:18:40.328 }, 00:18:40.328 { 00:18:40.328 "method": "nvmf_create_transport", 00:18:40.328 "params": { 00:18:40.328 "trtype": "TCP", 00:18:40.328 "max_queue_depth": 128, 00:18:40.328 "max_io_qpairs_per_ctrlr": 127, 00:18:40.328 "in_capsule_data_size": 4096, 00:18:40.328 "max_io_size": 131072, 00:18:40.328 "io_unit_size": 131072, 00:18:40.328 "max_aq_depth": 128, 00:18:40.328 "num_shared_buffers": 511, 00:18:40.328 "buf_cache_size": 4294967295, 00:18:40.328 "dif_insert_or_strip": false, 00:18:40.328 "zcopy": false, 00:18:40.328 "c2h_success": false, 00:18:40.328 "sock_priority": 0, 00:18:40.328 "abort_timeout_sec": 1, 00:18:40.328 "ack_timeout": 0, 00:18:40.328 "data_wr_pool_size": 0 00:18:40.328 } 00:18:40.328 }, 00:18:40.328 { 00:18:40.328 "method": "nvmf_create_subsystem", 00:18:40.328 "params": { 00:18:40.328 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:40.328 "allow_any_host": false, 00:18:40.328 "serial_number": "00000000000000000000", 00:18:40.328 "model_number": "SPDK bdev Controller", 00:18:40.328 "max_namespaces": 32, 00:18:40.328 "min_cntlid": 1, 00:18:40.328 "max_cntlid": 65519, 00:18:40.328 "ana_reporting": false 00:18:40.328 } 00:18:40.328 }, 00:18:40.328 { 00:18:40.328 "method": "nvmf_subsystem_add_host", 00:18:40.328 "params": { 00:18:40.328 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:40.328 "host": "nqn.2016-06.io.spdk:host1", 00:18:40.328 "psk": "key0" 00:18:40.328 } 00:18:40.328 }, 00:18:40.328 { 00:18:40.328 "method": "nvmf_subsystem_add_ns", 00:18:40.328 "params": { 00:18:40.328 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:40.328 "namespace": { 00:18:40.328 "nsid": 1, 
00:18:40.328 "bdev_name": "malloc0", 00:18:40.328 "nguid": "3E64BBAC0D9546EABBF5E0D5916B0C28", 00:18:40.328 "uuid": "3e64bbac-0d95-46ea-bbf5-e0d5916b0c28", 00:18:40.328 "no_auto_visible": false 00:18:40.328 } 00:18:40.328 } 00:18:40.328 }, 00:18:40.328 { 00:18:40.328 "method": "nvmf_subsystem_add_listener", 00:18:40.328 "params": { 00:18:40.328 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:40.328 "listen_address": { 00:18:40.328 "trtype": "TCP", 00:18:40.328 "adrfam": "IPv4", 00:18:40.328 "traddr": "10.0.0.2", 00:18:40.328 "trsvcid": "4420" 00:18:40.328 }, 00:18:40.328 "secure_channel": false, 00:18:40.328 "sock_impl": "ssl" 00:18:40.328 } 00:18:40.328 } 00:18:40.328 ] 00:18:40.328 } 00:18:40.328 ] 00:18:40.328 }' 00:18:40.328 23:21:55 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:40.585 23:21:55 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:18:40.585 "subsystems": [ 00:18:40.585 { 00:18:40.585 "subsystem": "keyring", 00:18:40.585 "config": [ 00:18:40.585 { 00:18:40.585 "method": "keyring_file_add_key", 00:18:40.585 "params": { 00:18:40.585 "name": "key0", 00:18:40.585 "path": "/tmp/tmp.81Rz786qYM" 00:18:40.585 } 00:18:40.585 } 00:18:40.585 ] 00:18:40.585 }, 00:18:40.585 { 00:18:40.585 "subsystem": "iobuf", 00:18:40.586 "config": [ 00:18:40.586 { 00:18:40.586 "method": "iobuf_set_options", 00:18:40.586 "params": { 00:18:40.586 "small_pool_count": 8192, 00:18:40.586 "large_pool_count": 1024, 00:18:40.586 "small_bufsize": 8192, 00:18:40.586 "large_bufsize": 135168 00:18:40.586 } 00:18:40.586 } 00:18:40.586 ] 00:18:40.586 }, 00:18:40.586 { 00:18:40.586 "subsystem": "sock", 00:18:40.586 "config": [ 00:18:40.586 { 00:18:40.586 "method": "sock_set_default_impl", 00:18:40.586 "params": { 00:18:40.586 "impl_name": "posix" 00:18:40.586 } 00:18:40.586 }, 00:18:40.586 { 00:18:40.586 "method": "sock_impl_set_options", 00:18:40.586 "params": { 00:18:40.586 "impl_name": "ssl", 00:18:40.586 "recv_buf_size": 4096, 00:18:40.586 "send_buf_size": 4096, 00:18:40.586 "enable_recv_pipe": true, 00:18:40.586 "enable_quickack": false, 00:18:40.586 "enable_placement_id": 0, 00:18:40.586 "enable_zerocopy_send_server": true, 00:18:40.586 "enable_zerocopy_send_client": false, 00:18:40.586 "zerocopy_threshold": 0, 00:18:40.586 "tls_version": 0, 00:18:40.586 "enable_ktls": false 00:18:40.586 } 00:18:40.586 }, 00:18:40.586 { 00:18:40.586 "method": "sock_impl_set_options", 00:18:40.586 "params": { 00:18:40.586 "impl_name": "posix", 00:18:40.586 "recv_buf_size": 2097152, 00:18:40.586 "send_buf_size": 2097152, 00:18:40.586 "enable_recv_pipe": true, 00:18:40.586 "enable_quickack": false, 00:18:40.586 "enable_placement_id": 0, 00:18:40.586 "enable_zerocopy_send_server": true, 00:18:40.586 "enable_zerocopy_send_client": false, 00:18:40.586 "zerocopy_threshold": 0, 00:18:40.586 "tls_version": 0, 00:18:40.586 "enable_ktls": false 00:18:40.586 } 00:18:40.586 } 00:18:40.586 ] 00:18:40.586 }, 00:18:40.586 { 00:18:40.586 "subsystem": "vmd", 00:18:40.586 "config": [] 00:18:40.586 }, 00:18:40.586 { 00:18:40.586 "subsystem": "accel", 00:18:40.586 "config": [ 00:18:40.586 { 00:18:40.586 "method": "accel_set_options", 00:18:40.586 "params": { 00:18:40.586 "small_cache_size": 128, 00:18:40.586 "large_cache_size": 16, 00:18:40.586 "task_count": 2048, 00:18:40.586 "sequence_count": 2048, 00:18:40.586 "buf_count": 2048 00:18:40.586 } 00:18:40.586 } 00:18:40.586 ] 00:18:40.586 }, 00:18:40.586 { 00:18:40.586 "subsystem": "bdev", 
00:18:40.586 "config": [ 00:18:40.586 { 00:18:40.586 "method": "bdev_set_options", 00:18:40.586 "params": { 00:18:40.586 "bdev_io_pool_size": 65535, 00:18:40.586 "bdev_io_cache_size": 256, 00:18:40.586 "bdev_auto_examine": true, 00:18:40.586 "iobuf_small_cache_size": 128, 00:18:40.586 "iobuf_large_cache_size": 16 00:18:40.586 } 00:18:40.586 }, 00:18:40.586 { 00:18:40.586 "method": "bdev_raid_set_options", 00:18:40.586 "params": { 00:18:40.586 "process_window_size_kb": 1024 00:18:40.586 } 00:18:40.586 }, 00:18:40.586 { 00:18:40.586 "method": "bdev_iscsi_set_options", 00:18:40.586 "params": { 00:18:40.586 "timeout_sec": 30 00:18:40.586 } 00:18:40.586 }, 00:18:40.586 { 00:18:40.586 "method": "bdev_nvme_set_options", 00:18:40.586 "params": { 00:18:40.586 "action_on_timeout": "none", 00:18:40.586 "timeout_us": 0, 00:18:40.586 "timeout_admin_us": 0, 00:18:40.586 "keep_alive_timeout_ms": 10000, 00:18:40.586 "arbitration_burst": 0, 00:18:40.586 "low_priority_weight": 0, 00:18:40.586 "medium_priority_weight": 0, 00:18:40.586 "high_priority_weight": 0, 00:18:40.586 "nvme_adminq_poll_period_us": 10000, 00:18:40.586 "nvme_ioq_poll_period_us": 0, 00:18:40.586 "io_queue_requests": 512, 00:18:40.586 "delay_cmd_submit": true, 00:18:40.586 "transport_retry_count": 4, 00:18:40.586 "bdev_retry_count": 3, 00:18:40.586 "transport_ack_timeout": 0, 00:18:40.586 "ctrlr_loss_timeout_sec": 0, 00:18:40.586 "reconnect_delay_sec": 0, 00:18:40.586 "fast_io_fail_timeout_sec": 0, 00:18:40.586 "disable_auto_failback": false, 00:18:40.586 "generate_uuids": false, 00:18:40.586 "transport_tos": 0, 00:18:40.586 "nvme_error_stat": false, 00:18:40.586 "rdma_srq_size": 0, 00:18:40.586 "io_path_stat": false, 00:18:40.586 "allow_accel_sequence": false, 00:18:40.586 "rdma_max_cq_size": 0, 00:18:40.586 "rdma_cm_event_timeout_ms": 0, 00:18:40.586 "dhchap_digests": [ 00:18:40.586 "sha256", 00:18:40.586 "sha384", 00:18:40.586 "sha512" 00:18:40.586 ], 00:18:40.586 "dhchap_dhgroups": [ 00:18:40.586 "null", 00:18:40.586 "ffdhe2048", 00:18:40.586 "ffdhe3072", 00:18:40.586 "ffdhe4096", 00:18:40.586 "ffdhe6144", 00:18:40.586 "ffdhe8192" 00:18:40.586 ] 00:18:40.586 } 00:18:40.586 }, 00:18:40.586 { 00:18:40.586 "method": "bdev_nvme_attach_controller", 00:18:40.586 "params": { 00:18:40.586 "name": "nvme0", 00:18:40.586 "trtype": "TCP", 00:18:40.586 "adrfam": "IPv4", 00:18:40.586 "traddr": "10.0.0.2", 00:18:40.586 "trsvcid": "4420", 00:18:40.586 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:40.587 "prchk_reftag": false, 00:18:40.587 "prchk_guard": false, 00:18:40.587 "ctrlr_loss_timeout_sec": 0, 00:18:40.587 "reconnect_delay_sec": 0, 00:18:40.587 "fast_io_fail_timeout_sec": 0, 00:18:40.587 "psk": "key0", 00:18:40.587 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:40.587 "hdgst": false, 00:18:40.587 "ddgst": false 00:18:40.587 } 00:18:40.587 }, 00:18:40.587 { 00:18:40.587 "method": "bdev_nvme_set_hotplug", 00:18:40.587 "params": { 00:18:40.587 "period_us": 100000, 00:18:40.587 "enable": false 00:18:40.587 } 00:18:40.587 }, 00:18:40.587 { 00:18:40.587 "method": "bdev_enable_histogram", 00:18:40.587 "params": { 00:18:40.587 "name": "nvme0n1", 00:18:40.587 "enable": true 00:18:40.587 } 00:18:40.587 }, 00:18:40.587 { 00:18:40.587 "method": "bdev_wait_for_examine" 00:18:40.587 } 00:18:40.587 ] 00:18:40.587 }, 00:18:40.587 { 00:18:40.587 "subsystem": "nbd", 00:18:40.587 "config": [] 00:18:40.587 } 00:18:40.587 ] 00:18:40.587 }' 00:18:40.587 23:21:55 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 2366054 00:18:40.587 23:21:55 nvmf_tcp.nvmf_tls 
-- common/autotest_common.sh@948 -- # '[' -z 2366054 ']' 00:18:40.587 23:21:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2366054 00:18:40.587 23:21:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:40.587 23:21:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:40.587 23:21:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2366054 00:18:40.587 23:21:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:40.587 23:21:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:40.587 23:21:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2366054' 00:18:40.587 killing process with pid 2366054 00:18:40.587 23:21:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2366054 00:18:40.587 Received shutdown signal, test time was about 1.000000 seconds 00:18:40.587 00:18:40.587 Latency(us) 00:18:40.587 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.587 =================================================================================================================== 00:18:40.587 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:40.587 23:21:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2366054 00:18:40.843 23:21:56 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 2365902 00:18:40.843 23:21:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2365902 ']' 00:18:40.843 23:21:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2365902 00:18:40.843 23:21:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:40.843 23:21:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:40.843 23:21:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2365902 00:18:40.843 23:21:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:40.843 23:21:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:40.843 23:21:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2365902' 00:18:40.843 killing process with pid 2365902 00:18:40.843 23:21:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2365902 00:18:40.843 23:21:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2365902 00:18:41.102 23:21:56 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:18:41.102 23:21:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:41.102 23:21:56 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:18:41.102 "subsystems": [ 00:18:41.102 { 00:18:41.102 "subsystem": "keyring", 00:18:41.102 "config": [ 00:18:41.102 { 00:18:41.102 "method": "keyring_file_add_key", 00:18:41.102 "params": { 00:18:41.102 "name": "key0", 00:18:41.102 "path": "/tmp/tmp.81Rz786qYM" 00:18:41.102 } 00:18:41.102 } 00:18:41.102 ] 00:18:41.102 }, 00:18:41.102 { 00:18:41.102 "subsystem": "iobuf", 00:18:41.102 "config": [ 00:18:41.102 { 00:18:41.102 "method": "iobuf_set_options", 00:18:41.102 "params": { 00:18:41.102 "small_pool_count": 8192, 00:18:41.102 "large_pool_count": 1024, 00:18:41.102 "small_bufsize": 8192, 00:18:41.102 "large_bufsize": 135168 00:18:41.102 } 00:18:41.102 } 00:18:41.102 ] 00:18:41.102 }, 00:18:41.102 { 00:18:41.102 "subsystem": "sock", 00:18:41.102 "config": [ 00:18:41.102 { 
00:18:41.102 "method": "sock_set_default_impl", 00:18:41.102 "params": { 00:18:41.102 "impl_name": "posix" 00:18:41.102 } 00:18:41.102 }, 00:18:41.102 { 00:18:41.102 "method": "sock_impl_set_options", 00:18:41.102 "params": { 00:18:41.102 "impl_name": "ssl", 00:18:41.102 "recv_buf_size": 4096, 00:18:41.102 "send_buf_size": 4096, 00:18:41.102 "enable_recv_pipe": true, 00:18:41.102 "enable_quickack": false, 00:18:41.102 "enable_placement_id": 0, 00:18:41.102 "enable_zerocopy_send_server": true, 00:18:41.102 "enable_zerocopy_send_client": false, 00:18:41.102 "zerocopy_threshold": 0, 00:18:41.102 "tls_version": 0, 00:18:41.102 "enable_ktls": false 00:18:41.102 } 00:18:41.102 }, 00:18:41.102 { 00:18:41.102 "method": "sock_impl_set_options", 00:18:41.102 "params": { 00:18:41.102 "impl_name": "posix", 00:18:41.102 "recv_buf_size": 2097152, 00:18:41.102 "send_buf_size": 2097152, 00:18:41.102 "enable_recv_pipe": true, 00:18:41.102 "enable_quickack": false, 00:18:41.102 "enable_placement_id": 0, 00:18:41.102 "enable_zerocopy_send_server": true, 00:18:41.102 "enable_zerocopy_send_client": false, 00:18:41.102 "zerocopy_threshold": 0, 00:18:41.102 "tls_version": 0, 00:18:41.102 "enable_ktls": false 00:18:41.102 } 00:18:41.102 } 00:18:41.102 ] 00:18:41.102 }, 00:18:41.102 { 00:18:41.102 "subsystem": "vmd", 00:18:41.102 "config": [] 00:18:41.102 }, 00:18:41.102 { 00:18:41.102 "subsystem": "accel", 00:18:41.102 "config": [ 00:18:41.102 { 00:18:41.102 "method": "accel_set_options", 00:18:41.102 "params": { 00:18:41.102 "small_cache_size": 128, 00:18:41.102 "large_cache_size": 16, 00:18:41.102 "task_count": 2048, 00:18:41.102 "sequence_count": 2048, 00:18:41.102 "buf_count": 2048 00:18:41.102 } 00:18:41.102 } 00:18:41.102 ] 00:18:41.102 }, 00:18:41.102 { 00:18:41.102 "subsystem": "bdev", 00:18:41.102 "config": [ 00:18:41.102 { 00:18:41.102 "method": "bdev_set_options", 00:18:41.102 "params": { 00:18:41.102 "bdev_io_pool_size": 65535, 00:18:41.102 "bdev_io_cache_size": 256, 00:18:41.102 "bdev_auto_examine": true, 00:18:41.102 "iobuf_small_cache_size": 128, 00:18:41.102 "iobuf_large_cache_size": 16 00:18:41.102 } 00:18:41.102 }, 00:18:41.102 { 00:18:41.102 "method": "bdev_raid_set_options", 00:18:41.102 "params": { 00:18:41.102 "process_window_size_kb": 1024 00:18:41.102 } 00:18:41.102 }, 00:18:41.102 { 00:18:41.102 "method": "bdev_iscsi_set_options", 00:18:41.102 "params": { 00:18:41.102 "timeout_sec": 30 00:18:41.102 } 00:18:41.102 }, 00:18:41.102 { 00:18:41.102 "method": "bdev_nvme_set_options", 00:18:41.102 "params": { 00:18:41.102 "action_on_timeout": "none", 00:18:41.102 "timeout_us": 0, 00:18:41.102 "timeout_admin_us": 0, 00:18:41.102 "keep_alive_timeout_ms": 10000, 00:18:41.102 "arbitration_burst": 0, 00:18:41.102 "low_priority_weight": 0, 00:18:41.102 "medium_priority_weight": 0, 00:18:41.102 "high_priority_weight": 0, 00:18:41.102 "nvme_adminq_poll_period_us": 10000, 00:18:41.102 "nvme_ioq_poll_period_us": 0, 00:18:41.102 "io_queue_requests": 0, 00:18:41.102 "delay_cmd_submit": true, 00:18:41.102 "transport_retry_count": 4, 00:18:41.102 "bdev_retry_count": 3, 00:18:41.102 "transport_ack_timeout": 0, 00:18:41.102 "ctrlr_loss_timeout_sec": 0, 00:18:41.102 "reconnect_delay_sec": 0, 00:18:41.102 "fast_io_fail_timeout_sec": 0, 00:18:41.102 "disable_auto_failback": false, 00:18:41.102 "generate_uuids": false, 00:18:41.102 "transport_tos": 0, 00:18:41.102 "nvme_error_stat": false, 00:18:41.102 "rdma_srq_size": 0, 00:18:41.102 "io_path_stat": false, 00:18:41.102 "allow_accel_sequence": false, 00:18:41.102 
"rdma_max_cq_size": 0, 00:18:41.102 "rdma_cm_event_timeout_ms": 0, 00:18:41.102 "dhchap_digests": [ 00:18:41.102 "sha256", 00:18:41.102 "sha384", 00:18:41.102 "sha512" 00:18:41.102 ], 00:18:41.102 "dhchap_dhgroups": [ 00:18:41.102 "null", 00:18:41.102 "ffdhe2048", 00:18:41.102 "ffdhe3072", 00:18:41.102 "ffdhe4096", 00:18:41.102 "ffdhe6144", 00:18:41.102 "ffdhe8192" 00:18:41.102 ] 00:18:41.102 } 00:18:41.102 }, 00:18:41.102 { 00:18:41.102 "method": "bdev_nvme_set_hotplug", 00:18:41.102 "params": { 00:18:41.102 "period_us": 100000, 00:18:41.102 "enable": false 00:18:41.102 } 00:18:41.102 }, 00:18:41.102 { 00:18:41.102 "method": "bdev_malloc_create", 00:18:41.102 "params": { 00:18:41.102 "name": "malloc0", 00:18:41.102 "num_blocks": 8192, 00:18:41.102 "block_size": 4096, 00:18:41.102 "physical_block_size": 4096, 00:18:41.102 "uuid": "3e64bbac-0d95-46ea-bbf5-e0d5916b0c28", 00:18:41.102 "optimal_io_boundary": 0 00:18:41.102 } 00:18:41.102 }, 00:18:41.102 { 00:18:41.102 "method": "bdev_wait_for_examine" 00:18:41.102 } 00:18:41.102 ] 00:18:41.102 }, 00:18:41.102 { 00:18:41.102 "subsystem": "nbd", 00:18:41.102 "config": [] 00:18:41.102 }, 00:18:41.102 { 00:18:41.102 "subsystem": "scheduler", 00:18:41.102 "config": [ 00:18:41.102 { 00:18:41.102 "method": "framework_set_scheduler", 00:18:41.102 "params": { 00:18:41.102 "name": "static" 00:18:41.102 } 00:18:41.103 } 00:18:41.103 ] 00:18:41.103 }, 00:18:41.103 { 00:18:41.103 "subsystem": "nvmf", 00:18:41.103 "config": [ 00:18:41.103 { 00:18:41.103 "method": "nvmf_set_config", 00:18:41.103 "params": { 00:18:41.103 "discovery_filter": "match_any", 00:18:41.103 "admin_cmd_passthru": { 00:18:41.103 "identify_ctrlr": false 00:18:41.103 } 00:18:41.103 } 00:18:41.103 }, 00:18:41.103 { 00:18:41.103 "method": "nvmf_set_max_subsystems", 00:18:41.103 "params": { 00:18:41.103 "max_subsystems": 1024 00:18:41.103 } 00:18:41.103 }, 00:18:41.103 { 00:18:41.103 "method": "nvmf_set_crdt", 00:18:41.103 "params": { 00:18:41.103 "crdt1": 0, 00:18:41.103 "crdt2": 0, 00:18:41.103 "crdt3": 0 00:18:41.103 } 00:18:41.103 }, 00:18:41.103 { 00:18:41.103 "method": "nvmf_create_transport", 00:18:41.103 "params": { 00:18:41.103 "trtype": "TCP", 00:18:41.103 "max_queue_depth": 128, 00:18:41.103 "max_io_qpairs_per_ctrlr": 127, 00:18:41.103 "in_capsule_data_size": 4096, 00:18:41.103 "max_io_size": 131072, 00:18:41.103 "io_unit_size": 131072, 00:18:41.103 "max_aq_depth": 128, 00:18:41.103 "num_shared_buffers": 511, 00:18:41.103 "buf_cache_size": 4294967295, 00:18:41.103 "dif_insert_or_strip": false, 00:18:41.103 "zcopy": false, 00:18:41.103 "c2h_success": false, 00:18:41.103 "sock_priority": 0, 00:18:41.103 "abort_timeout_sec": 1, 00:18:41.103 "ack_timeout": 0, 00:18:41.103 "data_wr_pool_size": 0 00:18:41.103 } 00:18:41.103 }, 00:18:41.103 { 00:18:41.103 "method": "nvmf_create_subsystem", 00:18:41.103 "params": { 00:18:41.103 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.103 "allow_any_host": false, 00:18:41.103 "serial_number": "00000000000000000000", 00:18:41.103 "model_number": "SPDK bdev Controller", 00:18:41.103 "max_namespaces": 32, 00:18:41.103 "min_cntlid": 1, 00:18:41.103 "max_cntlid": 65519, 00:18:41.103 "ana_reporting": false 00:18:41.103 } 00:18:41.103 }, 00:18:41.103 { 00:18:41.103 "method": "nvmf_subsystem_add_host", 00:18:41.103 "params": { 00:18:41.103 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.103 "host": "nqn.2016-06.io.spdk:host1", 00:18:41.103 "psk": "key0" 00:18:41.103 } 00:18:41.103 }, 00:18:41.103 { 00:18:41.103 "method": "nvmf_subsystem_add_ns", 00:18:41.103 
"params": { 00:18:41.103 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.103 "namespace": { 00:18:41.103 "nsid": 1, 00:18:41.103 "bdev_name": "malloc0", 00:18:41.103 "nguid": "3E64BBAC0D9546EABBF5E0D5916B0C28", 00:18:41.103 "uuid": "3e64bbac-0d95-46ea-bbf5-e0d5916b0c28", 00:18:41.103 "no_auto_visible": false 00:18:41.103 } 00:18:41.103 } 00:18:41.103 }, 00:18:41.103 { 00:18:41.103 "method": "nvmf_subsystem_add_listener", 00:18:41.103 "params": { 00:18:41.103 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.103 "listen_address": { 00:18:41.103 "trtype": "TCP", 00:18:41.103 "adrfam": "IPv4", 00:18:41.103 "traddr": "10.0.0.2", 00:18:41.103 "trsvcid": "4420" 00:18:41.103 }, 00:18:41.103 "secure_channel": false, 00:18:41.103 "sock_impl": "ssl" 00:18:41.103 } 00:18:41.103 } 00:18:41.103 ] 00:18:41.103 } 00:18:41.103 ] 00:18:41.103 }' 00:18:41.103 23:21:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:41.103 23:21:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:41.103 23:21:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2366466 00:18:41.103 23:21:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:41.103 23:21:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2366466 00:18:41.103 23:21:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2366466 ']' 00:18:41.103 23:21:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.103 23:21:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:41.103 23:21:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:41.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:41.103 23:21:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:41.103 23:21:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:41.360 [2024-07-15 23:21:56.439554] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:18:41.360 [2024-07-15 23:21:56.439641] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:41.360 EAL: No free 2048 kB hugepages reported on node 1 00:18:41.360 [2024-07-15 23:21:56.508540] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.360 [2024-07-15 23:21:56.622297] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:41.360 [2024-07-15 23:21:56.622365] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:41.360 [2024-07-15 23:21:56.622390] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:41.360 [2024-07-15 23:21:56.622404] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:41.361 [2024-07-15 23:21:56.622416] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:41.361 [2024-07-15 23:21:56.622500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.617 [2024-07-15 23:21:56.872668] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:41.618 [2024-07-15 23:21:56.904693] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:41.618 [2024-07-15 23:21:56.918967] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:42.182 23:21:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:42.182 23:21:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:42.182 23:21:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:42.182 23:21:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:42.182 23:21:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:42.182 23:21:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:42.182 23:21:57 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=2366569 00:18:42.182 23:21:57 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 2366569 /var/tmp/bdevperf.sock 00:18:42.182 23:21:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2366569 ']' 00:18:42.182 23:21:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:42.182 23:21:57 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:42.182 23:21:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:42.182 23:21:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:42.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:42.182 23:21:57 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:18:42.182 "subsystems": [ 00:18:42.182 { 00:18:42.182 "subsystem": "keyring", 00:18:42.182 "config": [ 00:18:42.182 { 00:18:42.182 "method": "keyring_file_add_key", 00:18:42.182 "params": { 00:18:42.182 "name": "key0", 00:18:42.182 "path": "/tmp/tmp.81Rz786qYM" 00:18:42.182 } 00:18:42.182 } 00:18:42.182 ] 00:18:42.182 }, 00:18:42.182 { 00:18:42.182 "subsystem": "iobuf", 00:18:42.182 "config": [ 00:18:42.182 { 00:18:42.182 "method": "iobuf_set_options", 00:18:42.182 "params": { 00:18:42.182 "small_pool_count": 8192, 00:18:42.182 "large_pool_count": 1024, 00:18:42.182 "small_bufsize": 8192, 00:18:42.182 "large_bufsize": 135168 00:18:42.182 } 00:18:42.182 } 00:18:42.182 ] 00:18:42.182 }, 00:18:42.182 { 00:18:42.182 "subsystem": "sock", 00:18:42.182 "config": [ 00:18:42.182 { 00:18:42.182 "method": "sock_set_default_impl", 00:18:42.182 "params": { 00:18:42.182 "impl_name": "posix" 00:18:42.182 } 00:18:42.182 }, 00:18:42.182 { 00:18:42.182 "method": "sock_impl_set_options", 00:18:42.182 "params": { 00:18:42.182 "impl_name": "ssl", 00:18:42.182 "recv_buf_size": 4096, 00:18:42.182 "send_buf_size": 4096, 00:18:42.182 "enable_recv_pipe": true, 00:18:42.182 "enable_quickack": false, 00:18:42.182 "enable_placement_id": 0, 00:18:42.182 "enable_zerocopy_send_server": true, 00:18:42.182 "enable_zerocopy_send_client": false, 00:18:42.182 "zerocopy_threshold": 0, 00:18:42.182 "tls_version": 0, 00:18:42.182 "enable_ktls": false 00:18:42.182 } 00:18:42.182 }, 00:18:42.182 { 00:18:42.182 "method": "sock_impl_set_options", 00:18:42.182 "params": { 00:18:42.182 "impl_name": "posix", 00:18:42.182 "recv_buf_size": 2097152, 00:18:42.182 "send_buf_size": 2097152, 00:18:42.182 "enable_recv_pipe": true, 00:18:42.182 "enable_quickack": false, 00:18:42.182 "enable_placement_id": 0, 00:18:42.182 "enable_zerocopy_send_server": true, 00:18:42.182 "enable_zerocopy_send_client": false, 00:18:42.182 "zerocopy_threshold": 0, 00:18:42.182 "tls_version": 0, 00:18:42.182 "enable_ktls": false 00:18:42.182 } 00:18:42.182 } 00:18:42.182 ] 00:18:42.182 }, 00:18:42.182 { 00:18:42.182 "subsystem": "vmd", 00:18:42.182 "config": [] 00:18:42.182 }, 00:18:42.182 { 00:18:42.182 "subsystem": "accel", 00:18:42.182 "config": [ 00:18:42.182 { 00:18:42.182 "method": "accel_set_options", 00:18:42.182 "params": { 00:18:42.182 "small_cache_size": 128, 00:18:42.182 "large_cache_size": 16, 00:18:42.182 "task_count": 2048, 00:18:42.182 "sequence_count": 2048, 00:18:42.182 "buf_count": 2048 00:18:42.182 } 00:18:42.182 } 00:18:42.182 ] 00:18:42.182 }, 00:18:42.182 { 00:18:42.182 "subsystem": "bdev", 00:18:42.182 "config": [ 00:18:42.182 { 00:18:42.182 "method": "bdev_set_options", 00:18:42.182 "params": { 00:18:42.182 "bdev_io_pool_size": 65535, 00:18:42.182 "bdev_io_cache_size": 256, 00:18:42.182 "bdev_auto_examine": true, 00:18:42.182 "iobuf_small_cache_size": 128, 00:18:42.182 "iobuf_large_cache_size": 16 00:18:42.182 } 00:18:42.182 }, 00:18:42.182 { 00:18:42.182 "method": "bdev_raid_set_options", 00:18:42.182 "params": { 00:18:42.182 "process_window_size_kb": 1024 00:18:42.182 } 00:18:42.182 }, 00:18:42.182 { 00:18:42.182 "method": "bdev_iscsi_set_options", 00:18:42.182 "params": { 00:18:42.182 "timeout_sec": 30 00:18:42.182 } 00:18:42.182 }, 00:18:42.182 { 00:18:42.182 "method": "bdev_nvme_set_options", 00:18:42.182 "params": { 00:18:42.182 "action_on_timeout": "none", 00:18:42.182 "timeout_us": 0, 00:18:42.182 "timeout_admin_us": 0, 00:18:42.182 "keep_alive_timeout_ms": 
10000, 00:18:42.182 "arbitration_burst": 0, 00:18:42.182 "low_priority_weight": 0, 00:18:42.182 "medium_priority_weight": 0, 00:18:42.182 "high_priority_weight": 0, 00:18:42.182 "nvme_adminq_poll_period_us": 10000, 00:18:42.182 "nvme_ioq_poll_period_us": 0, 00:18:42.182 "io_queue_requests": 512, 00:18:42.182 "delay_cmd_submit": true, 00:18:42.182 "transport_retry_count": 4, 00:18:42.182 "bdev_retry_count": 3, 00:18:42.182 "transport_ack_timeout": 0, 00:18:42.182 "ctrlr_loss_timeout_sec": 0, 00:18:42.182 "reconnect_delay_sec": 0, 00:18:42.182 "fast_io_fail_timeout_sec": 0, 00:18:42.182 "disable_auto_failback": false, 00:18:42.182 "generate_uuids": false, 00:18:42.182 "transport_tos": 0, 00:18:42.182 "nvme_error_stat": false, 00:18:42.182 "rdma_srq_size": 0, 00:18:42.182 "io_path_stat": false, 00:18:42.182 "allow_accel_sequence": false, 00:18:42.182 "rdma_max_cq_size": 0, 00:18:42.182 "rdma_cm_event_timeout_ms": 0, 00:18:42.182 "dhchap_digests": [ 00:18:42.182 "sha256", 00:18:42.182 "sha384", 00:18:42.182 "sha512" 00:18:42.182 ], 00:18:42.182 "dhchap_dhgroups": [ 00:18:42.182 "null", 00:18:42.182 "ffdhe2048", 00:18:42.182 "ffdhe3072", 00:18:42.182 "ffdhe4096", 00:18:42.182 "ffdhe6144", 00:18:42.182 "ffdhe8192" 00:18:42.182 ] 00:18:42.182 } 00:18:42.182 }, 00:18:42.182 { 00:18:42.182 "method": "bdev_nvme_attach_controller", 00:18:42.182 "params": { 00:18:42.182 "name": "nvme0", 00:18:42.182 "trtype": "TCP", 00:18:42.182 "adrfam": "IPv4", 00:18:42.182 "traddr": "10.0.0.2", 00:18:42.182 "trsvcid": "4420", 00:18:42.182 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.182 "prchk_reftag": false, 00:18:42.182 "prchk_guard": false, 00:18:42.182 "ctrlr_loss_timeout_sec": 0, 00:18:42.182 "reconnect_delay_sec": 0, 00:18:42.182 "fast_io_fail_timeout_sec": 0, 00:18:42.182 "psk": "key0", 00:18:42.182 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:42.182 "hdgst": false, 00:18:42.182 "ddgst": false 00:18:42.182 } 00:18:42.182 }, 00:18:42.182 { 00:18:42.182 "method": "bdev_nvme_set_hotplug", 00:18:42.182 "params": { 00:18:42.182 "period_us": 100000, 00:18:42.182 "enable": false 00:18:42.182 } 00:18:42.182 }, 00:18:42.182 { 00:18:42.182 "method": "bdev_enable_histogram", 00:18:42.182 "params": { 00:18:42.182 "name": "nvme0n1", 00:18:42.182 "enable": true 00:18:42.182 } 00:18:42.182 }, 00:18:42.182 { 00:18:42.182 "method": "bdev_wait_for_examine" 00:18:42.182 } 00:18:42.182 ] 00:18:42.182 }, 00:18:42.182 { 00:18:42.182 "subsystem": "nbd", 00:18:42.182 "config": [] 00:18:42.182 } 00:18:42.182 ] 00:18:42.182 }' 00:18:42.182 23:21:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:42.182 23:21:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:42.183 [2024-07-15 23:21:57.457796] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
00:18:42.183 [2024-07-15 23:21:57.457882] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2366569 ] 00:18:42.183 EAL: No free 2048 kB hugepages reported on node 1 00:18:42.440 [2024-07-15 23:21:57.519346] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.440 [2024-07-15 23:21:57.639117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:42.697 [2024-07-15 23:21:57.822229] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:43.261 23:21:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:43.261 23:21:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:43.261 23:21:58 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:43.261 23:21:58 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:18:43.519 23:21:58 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.519 23:21:58 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:43.519 Running I/O for 1 seconds... 00:18:44.892 00:18:44.892 Latency(us) 00:18:44.892 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.892 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:44.892 Verification LBA range: start 0x0 length 0x2000 00:18:44.892 nvme0n1 : 1.03 2914.51 11.38 0.00 0.00 43380.59 10971.21 73011.96 00:18:44.892 =================================================================================================================== 00:18:44.892 Total : 2914.51 11.38 0.00 0.00 43380.59 10971.21 73011.96 00:18:44.892 0 00:18:44.892 23:21:59 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:18:44.892 23:21:59 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:18:44.892 23:21:59 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:44.892 23:21:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:18:44.892 23:21:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:18:44.892 23:21:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:44.892 23:21:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:44.892 23:21:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:44.892 23:21:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:44.892 23:21:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:44.892 23:21:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:44.892 nvmf_trace.0 00:18:44.892 23:21:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:18:44.892 23:21:59 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 2366569 00:18:44.892 23:21:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2366569 ']' 00:18:44.892 23:21:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 
-- # kill -0 2366569 00:18:44.892 23:21:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:44.892 23:21:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:44.892 23:21:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2366569 00:18:44.892 23:21:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:44.892 23:21:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:44.892 23:21:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2366569' 00:18:44.892 killing process with pid 2366569 00:18:44.892 23:21:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2366569 00:18:44.892 Received shutdown signal, test time was about 1.000000 seconds 00:18:44.892 00:18:44.892 Latency(us) 00:18:44.892 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.892 =================================================================================================================== 00:18:44.892 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:44.892 23:21:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2366569 00:18:45.151 23:22:00 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:45.151 23:22:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:45.151 23:22:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:18:45.151 23:22:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:45.151 23:22:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:18:45.151 23:22:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:45.151 23:22:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:45.151 rmmod nvme_tcp 00:18:45.151 rmmod nvme_fabrics 00:18:45.151 rmmod nvme_keyring 00:18:45.151 23:22:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:45.151 23:22:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:18:45.151 23:22:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:18:45.151 23:22:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 2366466 ']' 00:18:45.151 23:22:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 2366466 00:18:45.151 23:22:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2366466 ']' 00:18:45.151 23:22:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2366466 00:18:45.151 23:22:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:45.151 23:22:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:45.151 23:22:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2366466 00:18:45.151 23:22:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:45.151 23:22:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:45.151 23:22:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2366466' 00:18:45.151 killing process with pid 2366466 00:18:45.151 23:22:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2366466 00:18:45.151 23:22:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2366466 00:18:45.410 23:22:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:45.410 23:22:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:45.410 23:22:00 
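Both killprocess invocations traced above (pid 2366569 for bdevperf, then 2366466 for the nvmf target) follow the same pattern: confirm the pid is still alive with kill -0, read its command name with ps so a reactor thread is never confused with a sudo wrapper, then kill and wait. A condensed sketch of that pattern, inferred from the xtrace output rather than taken from the real helper in autotest_common.sh:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                            # still running?
        [ "$(uname)" = Linux ] || return 1                    # the ps flags below are Linux-specific
        local name; name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0, reactor_1
        [ "$name" = sudo ] && return 1                        # refuse to kill a bare sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                           # only meaningful if $pid is a child of this shell
    }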
nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:45.410 23:22:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:45.410 23:22:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:45.410 23:22:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:45.410 23:22:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:45.410 23:22:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:47.941 23:22:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:47.941 23:22:02 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.QJntQf9IX5 /tmp/tmp.5K6AewGikH /tmp/tmp.81Rz786qYM 00:18:47.941 00:18:47.941 real 1m22.587s 00:18:47.941 user 2m9.647s 00:18:47.941 sys 0m29.358s 00:18:47.941 23:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:47.941 23:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.941 ************************************ 00:18:47.941 END TEST nvmf_tls 00:18:47.941 ************************************ 00:18:47.941 23:22:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:47.941 23:22:02 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:47.941 23:22:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:47.941 23:22:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:47.941 23:22:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:47.941 ************************************ 00:18:47.941 START TEST nvmf_fips 00:18:47.941 ************************************ 00:18:47.941 23:22:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:47.941 * Looking for test storage... 
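Each test in this log is launched through run_test, which times the target script and prints the START TEST / END TEST banners and the real/user/sys lines seen here. The rough shape below is inferred from that output only, not from the actual helper source:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"            # e.g. test/nvmf/fips/fips.sh --transport=tcp
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }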
00:18:47.941 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:47.941 23:22:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:47.941 23:22:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:47.941 23:22:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:47.941 23:22:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:47.941 23:22:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:47.941 23:22:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:47.941 23:22:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:47.941 23:22:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:47.941 23:22:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:47.941 23:22:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:47.941 23:22:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:47.941 23:22:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:47.941 23:22:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:47.941 23:22:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:18:47.941 23:22:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:47.941 23:22:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:47.941 23:22:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:47.941 23:22:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:47.941 23:22:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:47.941 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:47.941 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:47.941 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:47.941 23:22:02 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.941 23:22:02 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.941 23:22:02 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.941 23:22:02 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:18:47.942 Error setting digest 00:18:47.942 00022C2B357F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:18:47.942 00022C2B357F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:18:47.942 23:22:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:49.834 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:49.834 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:18:49.834 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:49.834 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:49.834 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:49.834 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:49.834 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:49.834 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:18:49.834 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:49.834 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:18:49.834 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:49.835 
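gather_supported_nvmf_pci_devs, traced above, builds arrays of PCI vendor/device IDs (e810, x722, mlx) and then filters the host's NICs against them. The same families can be listed by hand with lspci, using the device IDs visible in the trace; the Mellanox list is abridged here:

    lspci -D -d 8086:159b    # Intel E810 (matched into e810[], the family this rig uses)
    lspci -D -d 8086:1592    # Intel E810-C
    lspci -D -d 8086:37d2    # Intel X722 (x722[])
    lspci -D -d 15b3:        # any Mellanox device (mlx[] covers 0x1013-0x101d, 0xa2d6, 0xa2dc, ...)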
23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:18:49.835 Found 0000:84:00.0 (0x8086 - 0x159b) 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:18:49.835 Found 0000:84:00.1 (0x8086 - 0x159b) 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:18:49.835 Found net devices under 0000:84:00.0: cvl_0_0 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:18:49.835 Found net devices under 0000:84:00.1: cvl_0_1 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:49.835 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:49.835 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:18:49.835 00:18:49.835 --- 10.0.0.2 ping statistics --- 00:18:49.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.835 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:49.835 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:49.835 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:18:49.835 00:18:49.835 --- 10.0.0.1 ping statistics --- 00:18:49.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.835 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=2368876 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 2368876 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 2368876 ']' 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:49.835 23:22:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:49.835 [2024-07-15 23:22:05.008194] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:18:49.835 [2024-07-15 23:22:05.008270] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:49.835 EAL: No free 2048 kB hugepages reported on node 1 00:18:49.835 [2024-07-15 23:22:05.076338] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.092 [2024-07-15 23:22:05.191720] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:50.092 [2024-07-15 23:22:05.191800] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
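The nvmf_tcp_init sequence above moves the target-side port (cvl_0_0) into its own network namespace, leaves the initiator-side port (cvl_0_1) in the default namespace, opens TCP port 4420, and proves both directions with ping. Collapsed into one place, with interface names and addresses exactly as in this run (the preliminary ip -4 addr flush calls are omitted):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator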
00:18:50.092 [2024-07-15 23:22:05.191817] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:50.092 [2024-07-15 23:22:05.191830] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:50.092 [2024-07-15 23:22:05.191842] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:50.092 [2024-07-15 23:22:05.191875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:50.654 23:22:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:50.654 23:22:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:18:50.654 23:22:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:50.654 23:22:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:50.654 23:22:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:50.910 23:22:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:50.910 23:22:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:18:50.910 23:22:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:50.910 23:22:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:50.910 23:22:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:50.910 23:22:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:50.910 23:22:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:50.910 23:22:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:50.910 23:22:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:51.167 [2024-07-15 23:22:06.232835] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:51.167 [2024-07-15 23:22:06.248819] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:51.167 [2024-07-15 23:22:06.249020] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:51.167 [2024-07-15 23:22:06.280141] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:51.167 malloc0 00:18:51.167 23:22:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:51.167 23:22:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=2369033 00:18:51.167 23:22:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:51.167 23:22:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 2369033 /var/tmp/bdevperf.sock 00:18:51.167 23:22:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 2369033 ']' 00:18:51.167 23:22:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:51.167 23:22:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:18:51.167 23:22:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:51.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:51.167 23:22:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:51.167 23:22:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:51.167 [2024-07-15 23:22:06.364125] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:18:51.167 [2024-07-15 23:22:06.364201] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2369033 ] 00:18:51.167 EAL: No free 2048 kB hugepages reported on node 1 00:18:51.167 [2024-07-15 23:22:06.429126] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.424 [2024-07-15 23:22:06.545336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:51.989 23:22:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:51.989 23:22:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:18:51.989 23:22:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:52.554 [2024-07-15 23:22:07.569429] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:52.554 [2024-07-15 23:22:07.569567] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:52.554 TLSTESTn1 00:18:52.554 23:22:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:52.554 Running I/O for 10 seconds... 
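The FIPS-mode data path check above is the usual three-step bdevperf pattern: start bdevperf idle with -z, attach the TLS-protected controller through its RPC socket, then trigger the workload with the perform_tests helper. The commands below are collected from this log (run from the SPDK checkout, paths shortened; the real script also waits for the RPC socket to come up before attaching):

    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk test/nvmf/fips/key.txt
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests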
00:19:02.522 00:19:02.522 Latency(us) 00:19:02.522 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.522 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:02.522 Verification LBA range: start 0x0 length 0x2000 00:19:02.522 TLSTESTn1 : 10.03 2787.94 10.89 0.00 0.00 45823.46 7621.59 66798.17 00:19:02.522 =================================================================================================================== 00:19:02.522 Total : 2787.94 10.89 0.00 0.00 45823.46 7621.59 66798.17 00:19:02.522 0 00:19:02.522 23:22:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:02.522 23:22:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:02.522 23:22:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:19:02.522 23:22:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:19:02.522 23:22:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:19:02.522 23:22:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:02.522 23:22:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:19:02.522 23:22:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:19:02.522 23:22:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:19:02.522 23:22:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:02.782 nvmf_trace.0 00:19:02.782 23:22:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:19:02.782 23:22:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2369033 00:19:02.782 23:22:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 2369033 ']' 00:19:02.782 23:22:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 2369033 00:19:02.782 23:22:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:19:02.782 23:22:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:02.782 23:22:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2369033 00:19:02.782 23:22:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:02.782 23:22:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:02.782 23:22:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2369033' 00:19:02.782 killing process with pid 2369033 00:19:02.782 23:22:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 2369033 00:19:02.782 Received shutdown signal, test time was about 10.000000 seconds 00:19:02.782 00:19:02.782 Latency(us) 00:19:02.782 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.782 =================================================================================================================== 00:19:02.782 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:02.782 [2024-07-15 23:22:17.936086] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:02.782 23:22:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 2369033 00:19:03.039 23:22:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:03.039 23:22:18 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:19:03.039 23:22:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:19:03.039 23:22:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:03.039 23:22:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:19:03.039 23:22:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:03.039 23:22:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:03.039 rmmod nvme_tcp 00:19:03.039 rmmod nvme_fabrics 00:19:03.039 rmmod nvme_keyring 00:19:03.039 23:22:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:03.039 23:22:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:19:03.039 23:22:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:19:03.039 23:22:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 2368876 ']' 00:19:03.039 23:22:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 2368876 00:19:03.039 23:22:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 2368876 ']' 00:19:03.039 23:22:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 2368876 00:19:03.039 23:22:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:19:03.039 23:22:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:03.039 23:22:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2368876 00:19:03.039 23:22:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:03.039 23:22:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:03.039 23:22:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2368876' 00:19:03.039 killing process with pid 2368876 00:19:03.039 23:22:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 2368876 00:19:03.039 [2024-07-15 23:22:18.282228] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:03.039 23:22:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 2368876 00:19:03.297 23:22:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:03.297 23:22:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:03.297 23:22:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:03.297 23:22:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:03.297 23:22:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:03.297 23:22:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:03.297 23:22:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:03.297 23:22:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:05.856 23:22:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:05.856 23:22:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:05.856 00:19:05.856 real 0m17.922s 00:19:05.856 user 0m21.723s 00:19:05.856 sys 0m7.786s 00:19:05.856 23:22:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:05.856 23:22:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:05.856 ************************************ 00:19:05.856 END TEST nvmf_fips 
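For reference, the three preconditions fips.sh verified at the top of this test (OpenSSL 3.x, a loaded fips provider, and MD5 being rejected) can be reproduced by hand; /dev/null stands in here for the /dev/fd/62 process substitution the script used:

    openssl version                         # must report 3.0.0 or newer
    openssl list -providers | grep name     # a fips provider has to be among the loaded providers
    openssl md5 /dev/null \
        && echo 'md5 accepted: FIPS restrictions are NOT active' \
        || echo 'md5 rejected: FIPS restrictions are active'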
00:19:05.856 ************************************ 00:19:05.856 23:22:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:05.856 23:22:20 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:19:05.856 23:22:20 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:19:05.856 23:22:20 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:19:05.856 23:22:20 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:19:05.856 23:22:20 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:19:05.856 23:22:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:07.759 23:22:22 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:07.759 23:22:22 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:19:07.759 23:22:22 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:07.759 23:22:22 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:07.759 23:22:22 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:07.759 23:22:22 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:07.759 23:22:22 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:07.759 23:22:22 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:19:07.759 23:22:22 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:07.759 23:22:22 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:19:07.759 23:22:22 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:19:07.759 23:22:22 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:19:07.759 23:22:22 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:19:07.759 23:22:22 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:19:07.759 23:22:22 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:19:07.759 23:22:22 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:07.759 23:22:22 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:07.759 23:22:22 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:07.759 23:22:22 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:07.759 23:22:22 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:07.759 23:22:22 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:07.759 23:22:22 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:07.759 23:22:22 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:07.759 23:22:22 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:07.759 23:22:22 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:07.759 23:22:22 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:07.759 23:22:22 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:07.759 23:22:22 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:07.759 23:22:22 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:07.759 23:22:22 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:07.759 23:22:22 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:07.759 23:22:22 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:07.759 23:22:22 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:07.759 23:22:22 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:19:07.759 Found 0000:84:00.0 (0x8086 - 0x159b) 00:19:07.759 23:22:22 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:07.759 23:22:22 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:07.759 23:22:22 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:07.759 23:22:22 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:07.759 23:22:22 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:07.759 23:22:22 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:07.759 23:22:22 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:19:07.759 Found 0000:84:00.1 (0x8086 - 0x159b) 00:19:07.759 23:22:22 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:07.759 23:22:22 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:07.759 23:22:22 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:07.760 23:22:22 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:07.760 23:22:22 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:07.760 23:22:22 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:07.760 23:22:22 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:07.760 23:22:22 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:07.760 23:22:22 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:07.760 23:22:22 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:07.760 23:22:22 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:07.760 23:22:22 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:07.760 23:22:22 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:07.760 23:22:22 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:07.760 23:22:22 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:07.760 23:22:22 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:19:07.760 Found net devices under 0000:84:00.0: cvl_0_0 00:19:07.760 23:22:22 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:07.760 23:22:22 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:07.760 23:22:22 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:07.760 23:22:22 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:07.760 23:22:22 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:07.760 23:22:22 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:07.760 23:22:22 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:07.760 23:22:22 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:07.760 23:22:22 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:19:07.760 Found net devices under 0000:84:00.1: cvl_0_1 00:19:07.760 23:22:22 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:07.760 23:22:22 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:07.760 23:22:22 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:07.760 23:22:22 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:19:07.760 23:22:22 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:07.760 23:22:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:07.760 23:22:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:19:07.760 23:22:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:07.760 ************************************ 00:19:07.760 START TEST nvmf_perf_adq 00:19:07.760 ************************************ 00:19:07.760 23:22:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:07.760 * Looking for test storage... 00:19:07.760 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:07.760 23:22:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:07.760 23:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:07.760 23:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:07.760 23:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:07.760 23:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:07.760 23:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:07.760 23:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:07.760 23:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:07.760 23:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:07.760 23:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:07.760 23:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:07.760 23:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:07.760 23:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:07.760 23:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:19:07.760 23:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:07.760 23:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:07.760 23:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:07.760 23:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:07.760 23:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:07.760 23:22:22 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:07.760 23:22:22 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:07.760 23:22:22 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:07.760 23:22:22 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.760 23:22:22 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.760 23:22:22 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.760 23:22:22 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:07.760 23:22:22 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.760 23:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:19:07.760 23:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:07.760 23:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:07.760 23:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:07.760 23:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:07.760 23:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:07.760 23:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:07.760 23:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:07.760 23:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:07.760 23:22:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:07.760 23:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:07.760 23:22:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:19:09.689 Found 0000:84:00.0 (0x8086 - 0x159b) 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:19:09.689 Found 0000:84:00.1 (0x8086 - 0x159b) 
00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:19:09.689 Found net devices under 0000:84:00.0: cvl_0_0 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:19:09.689 Found net devices under 0000:84:00.1: cvl_0_1 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:19:09.689 23:22:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:10.254 23:22:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:12.156 23:22:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:17.421 23:22:32 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:19:17.421 Found 0000:84:00.0 (0x8086 - 0x159b) 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:19:17.421 Found 0000:84:00.1 (0x8086 - 0x159b) 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:19:17.421 Found net devices under 0000:84:00.0: cvl_0_0 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:19:17.421 Found net devices under 0000:84:00.1: cvl_0_1 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:17.421 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:17.422 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:17.422 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:17.422 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:17.422 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:17.422 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:17.422 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:17.422 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:17.422 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:17.422 23:22:32 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:17.422 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:17.422 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:19:17.422 00:19:17.422 --- 10.0.0.2 ping statistics --- 00:19:17.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.422 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:19:17.422 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:17.422 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:17.422 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:19:17.422 00:19:17.422 --- 10.0.0.1 ping statistics --- 00:19:17.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.422 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:19:17.422 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:17.422 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:17.422 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:17.422 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:17.422 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:17.422 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:17.422 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:17.422 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:17.422 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:17.422 23:22:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:17.422 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:17.422 23:22:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:17.422 23:22:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:17.422 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2375060 00:19:17.422 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:17.422 23:22:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2375060 00:19:17.422 23:22:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 2375060 ']' 00:19:17.422 23:22:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.422 23:22:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:17.422 23:22:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:17.422 23:22:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:17.422 23:22:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:17.422 [2024-07-15 23:22:32.663146] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
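The nvmftestinit step traced above builds the TCP test topology out of the two E810 ports: the target port (cvl_0_0) is moved into its own network namespace and addressed as 10.0.0.2, the peer port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, traffic to the NVMe/TCP port 4420 is allowed through the firewall, and both directions are verified with ping before the target application is started inside the namespace. Condensed into one listing (commands copied from the trace, run as root, interface names specific to this machine):

# Target side lives in its own namespace; initiator stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator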
00:19:17.422 [2024-07-15 23:22:32.663229] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:17.422 EAL: No free 2048 kB hugepages reported on node 1 00:19:17.422 [2024-07-15 23:22:32.731966] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:17.680 [2024-07-15 23:22:32.853839] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:17.680 [2024-07-15 23:22:32.853902] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:17.680 [2024-07-15 23:22:32.853919] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:17.680 [2024-07-15 23:22:32.853933] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:17.680 [2024-07-15 23:22:32.853944] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:17.680 [2024-07-15 23:22:32.854009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:17.680 [2024-07-15 23:22:32.854061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:17.680 [2024-07-15 23:22:32.854121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:17.680 [2024-07-15 23:22:32.854124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.610 23:22:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:18.610 23:22:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:19:18.610 23:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:18.610 23:22:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:18.610 23:22:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:18.610 23:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:18.610 23:22:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:19:18.610 23:22:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:18.610 23:22:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:18.610 23:22:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.610 23:22:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:18.610 23:22:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.610 23:22:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:18.610 23:22:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:18.610 23:22:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.610 23:22:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:18.610 23:22:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.610 23:22:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:18.610 23:22:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.610 23:22:33 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:19:18.610 23:22:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.610 23:22:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:18.610 23:22:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.610 23:22:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:18.610 [2024-07-15 23:22:33.824864] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:18.610 23:22:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.610 23:22:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:18.610 23:22:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.610 23:22:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:18.610 Malloc1 00:19:18.610 23:22:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.610 23:22:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:18.610 23:22:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.610 23:22:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:18.610 23:22:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.610 23:22:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:18.610 23:22:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.610 23:22:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:18.610 23:22:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.610 23:22:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:18.610 23:22:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.610 23:22:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:18.610 [2024-07-15 23:22:33.878391] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:18.610 23:22:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.610 23:22:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=2375218 00:19:18.610 23:22:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:19:18.610 23:22:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:18.610 EAL: No free 2048 kB hugepages reported on node 1 00:19:21.139 23:22:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:19:21.139 23:22:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.139 23:22:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:21.139 23:22:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.139 23:22:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:19:21.139 
"tick_rate": 2700000000, 00:19:21.139 "poll_groups": [ 00:19:21.139 { 00:19:21.139 "name": "nvmf_tgt_poll_group_000", 00:19:21.139 "admin_qpairs": 1, 00:19:21.139 "io_qpairs": 1, 00:19:21.139 "current_admin_qpairs": 1, 00:19:21.139 "current_io_qpairs": 1, 00:19:21.139 "pending_bdev_io": 0, 00:19:21.139 "completed_nvme_io": 20323, 00:19:21.139 "transports": [ 00:19:21.139 { 00:19:21.139 "trtype": "TCP" 00:19:21.139 } 00:19:21.139 ] 00:19:21.139 }, 00:19:21.139 { 00:19:21.139 "name": "nvmf_tgt_poll_group_001", 00:19:21.139 "admin_qpairs": 0, 00:19:21.139 "io_qpairs": 1, 00:19:21.139 "current_admin_qpairs": 0, 00:19:21.139 "current_io_qpairs": 1, 00:19:21.139 "pending_bdev_io": 0, 00:19:21.139 "completed_nvme_io": 20596, 00:19:21.139 "transports": [ 00:19:21.139 { 00:19:21.139 "trtype": "TCP" 00:19:21.139 } 00:19:21.139 ] 00:19:21.139 }, 00:19:21.139 { 00:19:21.139 "name": "nvmf_tgt_poll_group_002", 00:19:21.139 "admin_qpairs": 0, 00:19:21.139 "io_qpairs": 1, 00:19:21.139 "current_admin_qpairs": 0, 00:19:21.139 "current_io_qpairs": 1, 00:19:21.139 "pending_bdev_io": 0, 00:19:21.139 "completed_nvme_io": 20794, 00:19:21.139 "transports": [ 00:19:21.139 { 00:19:21.139 "trtype": "TCP" 00:19:21.139 } 00:19:21.139 ] 00:19:21.139 }, 00:19:21.139 { 00:19:21.139 "name": "nvmf_tgt_poll_group_003", 00:19:21.139 "admin_qpairs": 0, 00:19:21.139 "io_qpairs": 1, 00:19:21.139 "current_admin_qpairs": 0, 00:19:21.139 "current_io_qpairs": 1, 00:19:21.139 "pending_bdev_io": 0, 00:19:21.139 "completed_nvme_io": 20092, 00:19:21.139 "transports": [ 00:19:21.139 { 00:19:21.139 "trtype": "TCP" 00:19:21.139 } 00:19:21.139 ] 00:19:21.139 } 00:19:21.139 ] 00:19:21.139 }' 00:19:21.139 23:22:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:21.139 23:22:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:19:21.139 23:22:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:19:21.139 23:22:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:19:21.139 23:22:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 2375218 00:19:29.251 Initializing NVMe Controllers 00:19:29.251 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:29.251 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:29.251 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:29.251 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:29.251 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:29.251 Initialization complete. Launching workers. 
00:19:29.251 ======================================================== 00:19:29.251 Latency(us) 00:19:29.251 Device Information : IOPS MiB/s Average min max 00:19:29.251 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10439.80 40.78 6130.99 2256.34 9155.78 00:19:29.251 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10633.30 41.54 6018.34 2717.57 8977.99 00:19:29.251 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10739.70 41.95 5960.49 2165.37 8889.01 00:19:29.251 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10536.80 41.16 6073.85 2259.20 8906.49 00:19:29.251 ======================================================== 00:19:29.251 Total : 42349.60 165.43 6045.25 2165.37 9155.78 00:19:29.251 00:19:29.251 23:22:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:19:29.251 23:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:29.251 23:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:19:29.251 23:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:29.251 23:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:19:29.251 23:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:29.251 23:22:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:29.251 rmmod nvme_tcp 00:19:29.251 rmmod nvme_fabrics 00:19:29.251 rmmod nvme_keyring 00:19:29.251 23:22:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:29.251 23:22:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:19:29.251 23:22:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:19:29.251 23:22:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2375060 ']' 00:19:29.251 23:22:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2375060 00:19:29.251 23:22:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 2375060 ']' 00:19:29.251 23:22:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 2375060 00:19:29.251 23:22:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:19:29.251 23:22:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:29.251 23:22:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2375060 00:19:29.251 23:22:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:29.251 23:22:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:29.251 23:22:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2375060' 00:19:29.251 killing process with pid 2375060 00:19:29.251 23:22:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 2375060 00:19:29.251 23:22:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 2375060 00:19:29.251 23:22:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:29.251 23:22:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:29.251 23:22:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:29.252 23:22:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:29.252 23:22:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:29.252 23:22:44 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:29.252 23:22:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:29.252 23:22:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:31.152 23:22:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:31.152 23:22:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:19:31.152 23:22:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:32.086 23:22:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:33.990 23:22:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:39.247 23:22:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:19:39.247 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:39.247 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:39.247 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:39.247 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:39.247 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:39.247 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.247 23:22:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:39.247 23:22:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.247 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:39.247 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:39.247 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:39.247 23:22:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:39.247 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:39.247 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:39.247 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:39.247 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:39.247 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:39.247 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:39.247 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:39.247 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:39.247 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:39.247 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:39.247 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:39.247 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:39.247 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:39.247 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:39.247 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:39.247 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:39.247 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:39.247 23:22:54 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:39.247 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:39.247 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:39.247 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:39.247 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:39.247 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:39.247 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:39.247 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:39.247 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:39.247 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:39.247 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:39.247 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:19:39.248 Found 0000:84:00.0 (0x8086 - 0x159b) 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:19:39.248 Found 0000:84:00.1 (0x8086 - 0x159b) 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
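Between the plain run completed above and the ADQ-enabled run now being prepared, the trace tears the environment down (nvmftestfini) and reloads the ice driver (adq_reload_driver) so the second half starts from a clean state. A sketch of that step, with the parts the helpers hide (how the target PID is stopped, namespace removal) marked as assumptions:

# Sketch of the teardown and driver reload shown a little earlier in the trace.
# $nvmfpid is the nvmf_tgt PID recorded by the test (2375060 in this run);
# sending SIGTERM and deleting the namespace are assumptions about what the
# killprocess / _remove_spdk_ns helpers do internally.
kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null
modprobe -v -r nvme-tcp            # also drops nvme_fabrics / nvme_keyring, as logged
ip -4 addr flush cvl_0_1
ip netns delete cvl_0_0_ns_spdk    # returns cvl_0_0 to the root namespace (assumed)
rmmod ice
modprobe ice
sleep 5                            # give the driver time to re-create the net devices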
00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:19:39.248 Found net devices under 0000:84:00.0: cvl_0_0 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:19:39.248 Found net devices under 0000:84:00.1: cvl_0_1 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:39.248 
23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:39.248 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:39.248 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:19:39.248 00:19:39.248 --- 10.0.0.2 ping statistics --- 00:19:39.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:39.248 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:39.248 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:39.248 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:19:39.248 00:19:39.248 --- 10.0.0.1 ping statistics --- 00:19:39.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:39.248 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:19:39.248 net.core.busy_poll = 1 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:19:39.248 net.core.busy_read = 1 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2377838 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2377838 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 2377838 ']' 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:39.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:39.248 23:22:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:39.249 23:22:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:39.249 [2024-07-15 23:22:54.405445] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:19:39.249 [2024-07-15 23:22:54.405525] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:39.249 EAL: No free 2048 kB hugepages reported on node 1 00:19:39.249 [2024-07-15 23:22:54.474461] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:39.506 [2024-07-15 23:22:54.592272] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:39.506 [2024-07-15 23:22:54.592340] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:39.506 [2024-07-15 23:22:54.592357] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:39.506 [2024-07-15 23:22:54.592370] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:39.506 [2024-07-15 23:22:54.592381] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
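The adq_configure_driver block traced just above is the core of the ADQ setup: hardware TC offload is enabled on the target port, busy polling is turned on, an mqprio qdisc splits the port's queues into two traffic classes, and a hardware flower filter steers traffic for the NVMe/TCP listener into the dedicated class. Gathered into one listing for readability (commands copied from the trace; cvl_0_0 sits inside the cvl_0_0_ns_spdk namespace in this run, and the addresses and queue counts are specific to it):

ns="ip netns exec cvl_0_0_ns_spdk"
$ns ethtool --offload cvl_0_0 hw-tc-offload on
$ns ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
# Two traffic classes: TC0 gets 2 default queues, TC1 gets 2 queues for NVMe/TCP.
$ns tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
$ns tc qdisc add dev cvl_0_0 ingress
# Steer traffic to the NVMe/TCP listener (10.0.0.2:4420) into TC1 in hardware.
$ns tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
# The trace additionally runs scripts/perf/nvmf/set_xps_rxqs cvl_0_0 to align
# XPS queue mappings with the new layout.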
00:19:39.506 [2024-07-15 23:22:54.592466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:39.506 [2024-07-15 23:22:54.592520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:39.506 [2024-07-15 23:22:54.592635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:39.506 [2024-07-15 23:22:54.592638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.071 23:22:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:40.071 23:22:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:19:40.071 23:22:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:40.071 23:22:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:40.071 23:22:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:40.071 23:22:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:40.071 23:22:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:19:40.071 23:22:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:40.071 23:22:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:40.071 23:22:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.071 23:22:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:40.071 23:22:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.329 23:22:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:40.329 23:22:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:19:40.329 23:22:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.329 23:22:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:40.329 23:22:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.329 23:22:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:40.329 23:22:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.329 23:22:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:40.329 23:22:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.329 23:22:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:19:40.329 23:22:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.329 23:22:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:40.329 [2024-07-15 23:22:55.533893] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:40.329 23:22:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.329 23:22:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:40.329 23:22:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.329 23:22:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:40.329 Malloc1 00:19:40.329 23:22:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.329 23:22:55 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:40.329 23:22:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.329 23:22:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:40.329 23:22:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.329 23:22:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:40.329 23:22:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.329 23:22:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:40.329 23:22:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.329 23:22:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:40.329 23:22:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.329 23:22:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:40.329 [2024-07-15 23:22:55.587156] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:40.329 23:22:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.329 23:22:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=2377996 00:19:40.329 23:22:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:19:40.329 23:22:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:40.329 EAL: No free 2048 kB hugepages reported on node 1 00:19:42.859 23:22:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:19:42.859 23:22:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.859 23:22:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:42.859 23:22:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.859 23:22:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:19:42.859 "tick_rate": 2700000000, 00:19:42.859 "poll_groups": [ 00:19:42.859 { 00:19:42.859 "name": "nvmf_tgt_poll_group_000", 00:19:42.859 "admin_qpairs": 1, 00:19:42.859 "io_qpairs": 2, 00:19:42.859 "current_admin_qpairs": 1, 00:19:42.859 "current_io_qpairs": 2, 00:19:42.859 "pending_bdev_io": 0, 00:19:42.859 "completed_nvme_io": 25747, 00:19:42.859 "transports": [ 00:19:42.859 { 00:19:42.859 "trtype": "TCP" 00:19:42.859 } 00:19:42.859 ] 00:19:42.859 }, 00:19:42.859 { 00:19:42.859 "name": "nvmf_tgt_poll_group_001", 00:19:42.859 "admin_qpairs": 0, 00:19:42.859 "io_qpairs": 2, 00:19:42.859 "current_admin_qpairs": 0, 00:19:42.859 "current_io_qpairs": 2, 00:19:42.859 "pending_bdev_io": 0, 00:19:42.859 "completed_nvme_io": 25921, 00:19:42.859 "transports": [ 00:19:42.859 { 00:19:42.859 "trtype": "TCP" 00:19:42.859 } 00:19:42.859 ] 00:19:42.859 }, 00:19:42.859 { 00:19:42.859 "name": "nvmf_tgt_poll_group_002", 00:19:42.859 "admin_qpairs": 0, 00:19:42.859 "io_qpairs": 0, 00:19:42.859 "current_admin_qpairs": 0, 00:19:42.859 "current_io_qpairs": 0, 00:19:42.859 "pending_bdev_io": 0, 00:19:42.859 "completed_nvme_io": 0, 
00:19:42.859 "transports": [ 00:19:42.859 { 00:19:42.859 "trtype": "TCP" 00:19:42.859 } 00:19:42.859 ] 00:19:42.859 }, 00:19:42.859 { 00:19:42.859 "name": "nvmf_tgt_poll_group_003", 00:19:42.859 "admin_qpairs": 0, 00:19:42.859 "io_qpairs": 0, 00:19:42.859 "current_admin_qpairs": 0, 00:19:42.859 "current_io_qpairs": 0, 00:19:42.859 "pending_bdev_io": 0, 00:19:42.859 "completed_nvme_io": 0, 00:19:42.859 "transports": [ 00:19:42.859 { 00:19:42.859 "trtype": "TCP" 00:19:42.859 } 00:19:42.859 ] 00:19:42.859 } 00:19:42.859 ] 00:19:42.859 }' 00:19:42.859 23:22:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:19:42.859 23:22:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:19:42.859 23:22:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:19:42.859 23:22:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:19:42.859 23:22:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 2377996 00:19:51.029 Initializing NVMe Controllers 00:19:51.030 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:51.030 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:51.030 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:51.030 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:51.030 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:51.030 Initialization complete. Launching workers. 00:19:51.030 ======================================================== 00:19:51.030 Latency(us) 00:19:51.030 Device Information : IOPS MiB/s Average min max 00:19:51.030 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7013.09 27.39 9156.36 1909.02 55606.76 00:19:51.030 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6658.59 26.01 9611.57 1911.15 54502.19 00:19:51.030 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6410.49 25.04 10016.20 1924.01 54969.39 00:19:51.030 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7071.69 27.62 9051.74 1897.34 54781.21 00:19:51.030 ======================================================== 00:19:51.030 Total : 27153.85 106.07 9443.73 1897.34 55606.76 00:19:51.030 00:19:51.030 23:23:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:19:51.030 23:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:51.030 23:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:19:51.030 23:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:51.030 23:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:19:51.030 23:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:51.030 23:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:51.030 rmmod nvme_tcp 00:19:51.030 rmmod nvme_fabrics 00:19:51.030 rmmod nvme_keyring 00:19:51.030 23:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:51.030 23:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:19:51.030 23:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:19:51.030 23:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2377838 ']' 00:19:51.030 23:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 2377838 00:19:51.030 23:23:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 2377838 ']' 00:19:51.030 23:23:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 2377838 00:19:51.030 23:23:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:19:51.030 23:23:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:51.030 23:23:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2377838 00:19:51.030 23:23:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:51.030 23:23:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:51.030 23:23:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2377838' 00:19:51.030 killing process with pid 2377838 00:19:51.030 23:23:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 2377838 00:19:51.030 23:23:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 2377838 00:19:51.030 23:23:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:51.030 23:23:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:51.030 23:23:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:51.030 23:23:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:51.030 23:23:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:51.030 23:23:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:51.030 23:23:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:51.030 23:23:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:54.305 23:23:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:54.305 23:23:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:19:54.305 00:19:54.305 real 0m46.484s 00:19:54.305 user 2m46.277s 00:19:54.305 sys 0m9.838s 00:19:54.305 23:23:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:54.305 23:23:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:54.305 ************************************ 00:19:54.305 END TEST nvmf_perf_adq 00:19:54.305 ************************************ 00:19:54.305 23:23:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:54.305 23:23:09 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:54.305 23:23:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:54.305 23:23:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:54.305 23:23:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:54.305 ************************************ 00:19:54.305 START TEST nvmf_shutdown 00:19:54.305 ************************************ 00:19:54.305 23:23:09 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:54.305 * Looking for test storage... 
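The pass/fail criterion for the perf_adq run above is the nvmf_get_stats query followed by the jq filter: with ADQ steering in place, the connections opened by spdk_nvme_perf should land on only two of the four poll groups, leaving the other two idle. A minimal sketch of that check, assuming the stock scripts/rpc.py client stands in for the test's rpc_cmd wrapper and that the illustrative error message is not part of the original script:

  # ask the running target for per-poll-group transport statistics
  stats=$(scripts/rpc.py nvmf_get_stats)
  # count poll groups that currently carry no I/O queue pairs
  idle=$(echo "$stats" | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' | wc -l)
  # in this run two of the four groups are idle, so the threshold check passes
  [[ $idle -lt 2 ]] && echo "ADQ steering check failed: only $idle idle poll groups" >&2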
00:19:54.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:54.305 23:23:09 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:54.305 23:23:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:19:54.305 23:23:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:54.305 23:23:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:54.305 23:23:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:54.305 23:23:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:54.305 23:23:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:54.305 23:23:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:54.305 23:23:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:54.305 23:23:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:54.305 23:23:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:54.305 23:23:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:54.305 23:23:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:54.305 23:23:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:19:54.305 23:23:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:54.305 23:23:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:54.305 23:23:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:54.305 23:23:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:54.305 23:23:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:54.305 23:23:09 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:54.305 23:23:09 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:54.305 23:23:09 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:54.305 23:23:09 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.305 23:23:09 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.306 23:23:09 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.306 23:23:09 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:19:54.306 23:23:09 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.306 23:23:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:19:54.306 23:23:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:54.306 23:23:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:54.306 23:23:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:54.306 23:23:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:54.306 23:23:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:54.306 23:23:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:54.306 23:23:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:54.306 23:23:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:54.306 23:23:09 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:54.306 23:23:09 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:54.306 23:23:09 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:19:54.306 23:23:09 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:54.306 23:23:09 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:54.306 23:23:09 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:54.306 ************************************ 00:19:54.306 START TEST nvmf_shutdown_tc1 00:19:54.306 ************************************ 00:19:54.306 23:23:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:19:54.306 23:23:09 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:19:54.306 23:23:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:54.306 23:23:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:54.306 23:23:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:54.306 23:23:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:54.306 23:23:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:54.306 23:23:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:54.306 23:23:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:54.306 23:23:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:54.306 23:23:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:54.306 23:23:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:54.306 23:23:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:54.306 23:23:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:54.306 23:23:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:56.204 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:19:56.205 Found 0000:84:00.0 (0x8086 - 0x159b) 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:19:56.205 Found 0000:84:00.1 (0x8086 - 0x159b) 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:56.205 23:23:11 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:19:56.205 Found net devices under 0000:84:00.0: cvl_0_0 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:19:56.205 Found net devices under 0000:84:00.1: cvl_0_1 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:56.205 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:56.206 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:56.206 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:56.206 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:56.206 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:56.206 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:56.206 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:56.206 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:56.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:56.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:19:56.206 00:19:56.206 --- 10.0.0.2 ping statistics --- 00:19:56.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.206 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:19:56.206 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:56.206 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:56.206 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:19:56.206 00:19:56.206 --- 10.0.0.1 ping statistics --- 00:19:56.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.206 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:19:56.206 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:56.206 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:19:56.206 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:56.206 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:56.206 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:56.206 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:56.206 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:56.206 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:56.206 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:56.206 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:56.206 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:56.206 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:56.206 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:56.206 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2381305 00:19:56.206 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:56.206 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2381305 00:19:56.206 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2381305 ']' 00:19:56.206 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:56.206 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:56.206 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:56.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:56.206 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:56.206 23:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:56.206 [2024-07-15 23:23:11.490862] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
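The target for shutdown_tc1 is brought up the same way as in the previous test: nvmf_tgt is launched inside the target network namespace and the suite waits for its RPC socket before configuring it. A rough sketch of that launch, assuming the usual backgrounding and PID capture performed by the suite's nvmfappstart/waitforlisten helpers:

  # start the target on cores 1-4 (-m 0x1E) with all tracepoint groups enabled (-e 0xFFFF)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!
  # block until the target is listening on /var/tmp/spdk.sock before issuing RPCs
  waitforlisten "$nvmfpid"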
00:19:56.206 [2024-07-15 23:23:11.490937] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:56.464 EAL: No free 2048 kB hugepages reported on node 1 00:19:56.464 [2024-07-15 23:23:11.560219] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:56.464 [2024-07-15 23:23:11.677897] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:56.464 [2024-07-15 23:23:11.677959] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:56.464 [2024-07-15 23:23:11.677975] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:56.464 [2024-07-15 23:23:11.677988] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:56.464 [2024-07-15 23:23:11.677999] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:56.464 [2024-07-15 23:23:11.678106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:56.464 [2024-07-15 23:23:11.678202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:56.464 [2024-07-15 23:23:11.678231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:56.464 [2024-07-15 23:23:11.678234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:57.398 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:57.398 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:19:57.398 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:57.398 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:57.398 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:57.398 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:57.398 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:57.398 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.398 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:57.398 [2024-07-15 23:23:12.444750] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:57.398 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.398 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:57.398 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:57.398 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:57.398 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:57.398 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:57.398 23:23:12 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:57.398 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:57.398 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:57.399 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:57.399 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:57.399 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:57.399 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:57.399 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:57.399 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:57.399 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:57.399 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:57.399 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:57.399 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:57.399 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:57.399 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:57.399 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:57.399 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:57.399 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:57.399 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:57.399 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:57.399 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:57.399 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.399 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:57.399 Malloc1 00:19:57.399 [2024-07-15 23:23:12.519556] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:57.399 Malloc2 00:19:57.399 Malloc3 00:19:57.399 Malloc4 00:19:57.399 Malloc5 00:19:57.657 Malloc6 00:19:57.657 Malloc7 00:19:57.657 Malloc8 00:19:57.657 Malloc9 00:19:57.657 Malloc10 00:19:57.657 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.657 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:57.657 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:57.657 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:57.915 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2381490 00:19:57.915 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2381490 
/var/tmp/bdevperf.sock 00:19:57.915 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2381490 ']' 00:19:57.915 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:57.915 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:19:57.915 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:57.915 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:57.915 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:57.915 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:57.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:57.915 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:19:57.915 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:57.915 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:57.915 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:57.915 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:57.915 { 00:19:57.915 "params": { 00:19:57.915 "name": "Nvme$subsystem", 00:19:57.915 "trtype": "$TEST_TRANSPORT", 00:19:57.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:57.915 "adrfam": "ipv4", 00:19:57.915 "trsvcid": "$NVMF_PORT", 00:19:57.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:57.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:57.915 "hdgst": ${hdgst:-false}, 00:19:57.915 "ddgst": ${ddgst:-false} 00:19:57.915 }, 00:19:57.915 "method": "bdev_nvme_attach_controller" 00:19:57.915 } 00:19:57.915 EOF 00:19:57.915 )") 00:19:57.915 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:57.915 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:57.915 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:57.915 { 00:19:57.915 "params": { 00:19:57.915 "name": "Nvme$subsystem", 00:19:57.915 "trtype": "$TEST_TRANSPORT", 00:19:57.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:57.915 "adrfam": "ipv4", 00:19:57.915 "trsvcid": "$NVMF_PORT", 00:19:57.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:57.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:57.915 "hdgst": ${hdgst:-false}, 00:19:57.915 "ddgst": ${ddgst:-false} 00:19:57.915 }, 00:19:57.915 "method": "bdev_nvme_attach_controller" 00:19:57.915 } 00:19:57.915 EOF 00:19:57.915 )") 00:19:57.915 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:57.915 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:57.915 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:57.915 { 00:19:57.915 "params": { 00:19:57.915 
"name": "Nvme$subsystem", 00:19:57.915 "trtype": "$TEST_TRANSPORT", 00:19:57.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:57.915 "adrfam": "ipv4", 00:19:57.915 "trsvcid": "$NVMF_PORT", 00:19:57.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:57.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:57.915 "hdgst": ${hdgst:-false}, 00:19:57.915 "ddgst": ${ddgst:-false} 00:19:57.915 }, 00:19:57.915 "method": "bdev_nvme_attach_controller" 00:19:57.915 } 00:19:57.915 EOF 00:19:57.915 )") 00:19:57.915 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:57.915 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:57.915 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:57.915 { 00:19:57.915 "params": { 00:19:57.915 "name": "Nvme$subsystem", 00:19:57.915 "trtype": "$TEST_TRANSPORT", 00:19:57.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:57.915 "adrfam": "ipv4", 00:19:57.915 "trsvcid": "$NVMF_PORT", 00:19:57.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:57.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:57.915 "hdgst": ${hdgst:-false}, 00:19:57.915 "ddgst": ${ddgst:-false} 00:19:57.915 }, 00:19:57.915 "method": "bdev_nvme_attach_controller" 00:19:57.915 } 00:19:57.915 EOF 00:19:57.915 )") 00:19:57.915 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:57.915 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:57.915 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:57.915 { 00:19:57.915 "params": { 00:19:57.915 "name": "Nvme$subsystem", 00:19:57.915 "trtype": "$TEST_TRANSPORT", 00:19:57.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:57.915 "adrfam": "ipv4", 00:19:57.915 "trsvcid": "$NVMF_PORT", 00:19:57.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:57.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:57.915 "hdgst": ${hdgst:-false}, 00:19:57.915 "ddgst": ${ddgst:-false} 00:19:57.915 }, 00:19:57.915 "method": "bdev_nvme_attach_controller" 00:19:57.915 } 00:19:57.915 EOF 00:19:57.915 )") 00:19:57.915 23:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:57.915 23:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:57.915 23:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:57.915 { 00:19:57.915 "params": { 00:19:57.915 "name": "Nvme$subsystem", 00:19:57.915 "trtype": "$TEST_TRANSPORT", 00:19:57.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:57.915 "adrfam": "ipv4", 00:19:57.915 "trsvcid": "$NVMF_PORT", 00:19:57.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:57.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:57.915 "hdgst": ${hdgst:-false}, 00:19:57.915 "ddgst": ${ddgst:-false} 00:19:57.915 }, 00:19:57.915 "method": "bdev_nvme_attach_controller" 00:19:57.915 } 00:19:57.915 EOF 00:19:57.915 )") 00:19:57.915 23:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:57.915 23:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:57.915 23:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:57.915 { 00:19:57.915 "params": { 00:19:57.915 "name": "Nvme$subsystem", 
00:19:57.915 "trtype": "$TEST_TRANSPORT", 00:19:57.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:57.915 "adrfam": "ipv4", 00:19:57.915 "trsvcid": "$NVMF_PORT", 00:19:57.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:57.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:57.915 "hdgst": ${hdgst:-false}, 00:19:57.915 "ddgst": ${ddgst:-false} 00:19:57.915 }, 00:19:57.915 "method": "bdev_nvme_attach_controller" 00:19:57.915 } 00:19:57.915 EOF 00:19:57.915 )") 00:19:57.915 23:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:57.915 23:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:57.915 23:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:57.915 { 00:19:57.915 "params": { 00:19:57.915 "name": "Nvme$subsystem", 00:19:57.915 "trtype": "$TEST_TRANSPORT", 00:19:57.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:57.915 "adrfam": "ipv4", 00:19:57.915 "trsvcid": "$NVMF_PORT", 00:19:57.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:57.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:57.916 "hdgst": ${hdgst:-false}, 00:19:57.916 "ddgst": ${ddgst:-false} 00:19:57.916 }, 00:19:57.916 "method": "bdev_nvme_attach_controller" 00:19:57.916 } 00:19:57.916 EOF 00:19:57.916 )") 00:19:57.916 23:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:57.916 23:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:57.916 23:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:57.916 { 00:19:57.916 "params": { 00:19:57.916 "name": "Nvme$subsystem", 00:19:57.916 "trtype": "$TEST_TRANSPORT", 00:19:57.916 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:57.916 "adrfam": "ipv4", 00:19:57.916 "trsvcid": "$NVMF_PORT", 00:19:57.916 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:57.916 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:57.916 "hdgst": ${hdgst:-false}, 00:19:57.916 "ddgst": ${ddgst:-false} 00:19:57.916 }, 00:19:57.916 "method": "bdev_nvme_attach_controller" 00:19:57.916 } 00:19:57.916 EOF 00:19:57.916 )") 00:19:57.916 23:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:57.916 23:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:57.916 23:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:57.916 { 00:19:57.916 "params": { 00:19:57.916 "name": "Nvme$subsystem", 00:19:57.916 "trtype": "$TEST_TRANSPORT", 00:19:57.916 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:57.916 "adrfam": "ipv4", 00:19:57.916 "trsvcid": "$NVMF_PORT", 00:19:57.916 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:57.916 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:57.916 "hdgst": ${hdgst:-false}, 00:19:57.916 "ddgst": ${ddgst:-false} 00:19:57.916 }, 00:19:57.916 "method": "bdev_nvme_attach_controller" 00:19:57.916 } 00:19:57.916 EOF 00:19:57.916 )") 00:19:57.916 23:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:57.916 23:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:19:57.916 23:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:57.916 23:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:57.916 "params": { 00:19:57.916 "name": "Nvme1", 00:19:57.916 "trtype": "tcp", 00:19:57.916 "traddr": "10.0.0.2", 00:19:57.916 "adrfam": "ipv4", 00:19:57.916 "trsvcid": "4420", 00:19:57.916 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:57.916 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:57.916 "hdgst": false, 00:19:57.916 "ddgst": false 00:19:57.916 }, 00:19:57.916 "method": "bdev_nvme_attach_controller" 00:19:57.916 },{ 00:19:57.916 "params": { 00:19:57.916 "name": "Nvme2", 00:19:57.916 "trtype": "tcp", 00:19:57.916 "traddr": "10.0.0.2", 00:19:57.916 "adrfam": "ipv4", 00:19:57.916 "trsvcid": "4420", 00:19:57.916 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:57.916 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:57.916 "hdgst": false, 00:19:57.916 "ddgst": false 00:19:57.916 }, 00:19:57.916 "method": "bdev_nvme_attach_controller" 00:19:57.916 },{ 00:19:57.916 "params": { 00:19:57.916 "name": "Nvme3", 00:19:57.916 "trtype": "tcp", 00:19:57.916 "traddr": "10.0.0.2", 00:19:57.916 "adrfam": "ipv4", 00:19:57.916 "trsvcid": "4420", 00:19:57.916 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:57.916 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:57.916 "hdgst": false, 00:19:57.916 "ddgst": false 00:19:57.916 }, 00:19:57.916 "method": "bdev_nvme_attach_controller" 00:19:57.916 },{ 00:19:57.916 "params": { 00:19:57.916 "name": "Nvme4", 00:19:57.916 "trtype": "tcp", 00:19:57.916 "traddr": "10.0.0.2", 00:19:57.916 "adrfam": "ipv4", 00:19:57.916 "trsvcid": "4420", 00:19:57.916 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:57.916 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:57.916 "hdgst": false, 00:19:57.916 "ddgst": false 00:19:57.916 }, 00:19:57.916 "method": "bdev_nvme_attach_controller" 00:19:57.916 },{ 00:19:57.916 "params": { 00:19:57.916 "name": "Nvme5", 00:19:57.916 "trtype": "tcp", 00:19:57.916 "traddr": "10.0.0.2", 00:19:57.916 "adrfam": "ipv4", 00:19:57.916 "trsvcid": "4420", 00:19:57.916 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:57.916 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:57.916 "hdgst": false, 00:19:57.916 "ddgst": false 00:19:57.916 }, 00:19:57.916 "method": "bdev_nvme_attach_controller" 00:19:57.916 },{ 00:19:57.916 "params": { 00:19:57.916 "name": "Nvme6", 00:19:57.916 "trtype": "tcp", 00:19:57.916 "traddr": "10.0.0.2", 00:19:57.916 "adrfam": "ipv4", 00:19:57.916 "trsvcid": "4420", 00:19:57.916 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:57.916 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:57.916 "hdgst": false, 00:19:57.916 "ddgst": false 00:19:57.916 }, 00:19:57.916 "method": "bdev_nvme_attach_controller" 00:19:57.916 },{ 00:19:57.916 "params": { 00:19:57.916 "name": "Nvme7", 00:19:57.916 "trtype": "tcp", 00:19:57.916 "traddr": "10.0.0.2", 00:19:57.916 "adrfam": "ipv4", 00:19:57.916 "trsvcid": "4420", 00:19:57.916 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:57.916 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:57.916 "hdgst": false, 00:19:57.916 "ddgst": false 00:19:57.916 }, 00:19:57.916 "method": "bdev_nvme_attach_controller" 00:19:57.916 },{ 00:19:57.916 "params": { 00:19:57.916 "name": "Nvme8", 00:19:57.916 "trtype": "tcp", 00:19:57.916 "traddr": "10.0.0.2", 00:19:57.916 "adrfam": "ipv4", 00:19:57.916 "trsvcid": "4420", 00:19:57.916 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:57.916 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:57.916 "hdgst": false, 
00:19:57.916 "ddgst": false 00:19:57.916 }, 00:19:57.916 "method": "bdev_nvme_attach_controller" 00:19:57.916 },{ 00:19:57.916 "params": { 00:19:57.916 "name": "Nvme9", 00:19:57.916 "trtype": "tcp", 00:19:57.916 "traddr": "10.0.0.2", 00:19:57.916 "adrfam": "ipv4", 00:19:57.916 "trsvcid": "4420", 00:19:57.916 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:57.916 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:57.916 "hdgst": false, 00:19:57.916 "ddgst": false 00:19:57.916 }, 00:19:57.916 "method": "bdev_nvme_attach_controller" 00:19:57.916 },{ 00:19:57.916 "params": { 00:19:57.916 "name": "Nvme10", 00:19:57.916 "trtype": "tcp", 00:19:57.916 "traddr": "10.0.0.2", 00:19:57.916 "adrfam": "ipv4", 00:19:57.916 "trsvcid": "4420", 00:19:57.916 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:57.916 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:57.916 "hdgst": false, 00:19:57.916 "ddgst": false 00:19:57.916 }, 00:19:57.916 "method": "bdev_nvme_attach_controller" 00:19:57.916 }' 00:19:57.916 [2024-07-15 23:23:13.027340] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:19:57.916 [2024-07-15 23:23:13.027412] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:57.916 EAL: No free 2048 kB hugepages reported on node 1 00:19:57.916 [2024-07-15 23:23:13.091054] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.916 [2024-07-15 23:23:13.201938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.813 23:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:59.813 23:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:19:59.813 23:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:59.813 23:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.813 23:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:59.813 23:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.813 23:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2381490 00:19:59.813 23:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:19:59.813 23:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:20:00.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2381490 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:00.746 23:23:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2381305 00:20:00.746 23:23:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:00.746 23:23:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:00.746 23:23:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:20:00.746 23:23:16 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:20:00.746 23:23:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:00.746 23:23:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:00.746 { 00:20:00.746 "params": { 00:20:00.746 "name": "Nvme$subsystem", 00:20:00.746 "trtype": "$TEST_TRANSPORT", 00:20:00.746 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:00.746 "adrfam": "ipv4", 00:20:00.746 "trsvcid": "$NVMF_PORT", 00:20:00.746 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:00.746 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:00.746 "hdgst": ${hdgst:-false}, 00:20:00.746 "ddgst": ${ddgst:-false} 00:20:00.746 }, 00:20:00.746 "method": "bdev_nvme_attach_controller" 00:20:00.746 } 00:20:00.746 EOF 00:20:00.746 )") 00:20:00.746 23:23:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:00.746 23:23:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:00.746 23:23:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:00.746 { 00:20:00.746 "params": { 00:20:00.746 "name": "Nvme$subsystem", 00:20:00.746 "trtype": "$TEST_TRANSPORT", 00:20:00.746 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:00.746 "adrfam": "ipv4", 00:20:00.746 "trsvcid": "$NVMF_PORT", 00:20:00.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:00.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:00.747 "hdgst": ${hdgst:-false}, 00:20:00.747 "ddgst": ${ddgst:-false} 00:20:00.747 }, 00:20:00.747 "method": "bdev_nvme_attach_controller" 00:20:00.747 } 00:20:00.747 EOF 00:20:00.747 )") 00:20:00.747 23:23:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:00.747 23:23:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:00.747 23:23:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:00.747 { 00:20:00.747 "params": { 00:20:00.747 "name": "Nvme$subsystem", 00:20:00.747 "trtype": "$TEST_TRANSPORT", 00:20:00.747 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:00.747 "adrfam": "ipv4", 00:20:00.747 "trsvcid": "$NVMF_PORT", 00:20:00.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:00.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:00.747 "hdgst": ${hdgst:-false}, 00:20:00.747 "ddgst": ${ddgst:-false} 00:20:00.747 }, 00:20:00.747 "method": "bdev_nvme_attach_controller" 00:20:00.747 } 00:20:00.747 EOF 00:20:00.747 )") 00:20:00.747 23:23:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:00.747 23:23:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:00.747 23:23:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:00.747 { 00:20:00.747 "params": { 00:20:00.747 "name": "Nvme$subsystem", 00:20:00.747 "trtype": "$TEST_TRANSPORT", 00:20:00.747 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:00.747 "adrfam": "ipv4", 00:20:00.747 "trsvcid": "$NVMF_PORT", 00:20:00.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:00.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:00.747 "hdgst": ${hdgst:-false}, 00:20:00.747 "ddgst": ${ddgst:-false} 00:20:00.747 }, 00:20:00.747 "method": "bdev_nvme_attach_controller" 00:20:00.747 } 00:20:00.747 EOF 00:20:00.747 )") 00:20:00.747 23:23:16 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:00.747 23:23:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:00.747 23:23:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:00.747 { 00:20:00.747 "params": { 00:20:00.747 "name": "Nvme$subsystem", 00:20:00.747 "trtype": "$TEST_TRANSPORT", 00:20:00.747 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:00.747 "adrfam": "ipv4", 00:20:00.747 "trsvcid": "$NVMF_PORT", 00:20:00.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:00.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:00.747 "hdgst": ${hdgst:-false}, 00:20:00.747 "ddgst": ${ddgst:-false} 00:20:00.747 }, 00:20:00.747 "method": "bdev_nvme_attach_controller" 00:20:00.747 } 00:20:00.747 EOF 00:20:00.747 )") 00:20:00.747 23:23:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:00.747 23:23:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:00.747 23:23:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:00.747 { 00:20:00.747 "params": { 00:20:00.747 "name": "Nvme$subsystem", 00:20:00.747 "trtype": "$TEST_TRANSPORT", 00:20:00.747 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:00.747 "adrfam": "ipv4", 00:20:00.747 "trsvcid": "$NVMF_PORT", 00:20:00.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:00.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:00.747 "hdgst": ${hdgst:-false}, 00:20:00.747 "ddgst": ${ddgst:-false} 00:20:00.747 }, 00:20:00.747 "method": "bdev_nvme_attach_controller" 00:20:00.747 } 00:20:00.747 EOF 00:20:00.747 )") 00:20:00.747 23:23:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:00.747 23:23:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:00.747 23:23:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:00.747 { 00:20:00.747 "params": { 00:20:00.747 "name": "Nvme$subsystem", 00:20:00.747 "trtype": "$TEST_TRANSPORT", 00:20:00.747 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:00.747 "adrfam": "ipv4", 00:20:00.747 "trsvcid": "$NVMF_PORT", 00:20:00.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:00.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:00.747 "hdgst": ${hdgst:-false}, 00:20:00.747 "ddgst": ${ddgst:-false} 00:20:00.747 }, 00:20:00.747 "method": "bdev_nvme_attach_controller" 00:20:00.747 } 00:20:00.747 EOF 00:20:00.747 )") 00:20:00.747 23:23:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:00.747 23:23:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:00.747 23:23:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:00.747 { 00:20:00.747 "params": { 00:20:00.747 "name": "Nvme$subsystem", 00:20:00.747 "trtype": "$TEST_TRANSPORT", 00:20:00.747 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:00.747 "adrfam": "ipv4", 00:20:00.747 "trsvcid": "$NVMF_PORT", 00:20:00.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:00.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:00.747 "hdgst": ${hdgst:-false}, 00:20:00.747 "ddgst": ${ddgst:-false} 00:20:00.747 }, 00:20:00.747 "method": "bdev_nvme_attach_controller" 00:20:00.747 } 00:20:00.747 EOF 00:20:00.747 )") 00:20:00.747 23:23:16 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:00.747 23:23:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:00.747 23:23:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:00.747 { 00:20:00.747 "params": { 00:20:00.747 "name": "Nvme$subsystem", 00:20:00.747 "trtype": "$TEST_TRANSPORT", 00:20:00.747 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:00.747 "adrfam": "ipv4", 00:20:00.747 "trsvcid": "$NVMF_PORT", 00:20:00.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:00.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:00.747 "hdgst": ${hdgst:-false}, 00:20:00.747 "ddgst": ${ddgst:-false} 00:20:00.747 }, 00:20:00.747 "method": "bdev_nvme_attach_controller" 00:20:00.747 } 00:20:00.747 EOF 00:20:00.747 )") 00:20:00.747 23:23:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:00.747 23:23:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:00.747 23:23:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:00.747 { 00:20:00.747 "params": { 00:20:00.747 "name": "Nvme$subsystem", 00:20:00.747 "trtype": "$TEST_TRANSPORT", 00:20:00.747 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:00.747 "adrfam": "ipv4", 00:20:00.747 "trsvcid": "$NVMF_PORT", 00:20:00.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:00.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:00.747 "hdgst": ${hdgst:-false}, 00:20:00.747 "ddgst": ${ddgst:-false} 00:20:00.747 }, 00:20:00.747 "method": "bdev_nvme_attach_controller" 00:20:00.747 } 00:20:00.747 EOF 00:20:00.747 )") 00:20:00.747 23:23:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:00.747 23:23:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:20:00.747 23:23:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:20:00.747 23:23:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:00.747 "params": { 00:20:00.747 "name": "Nvme1", 00:20:00.747 "trtype": "tcp", 00:20:00.747 "traddr": "10.0.0.2", 00:20:00.747 "adrfam": "ipv4", 00:20:00.747 "trsvcid": "4420", 00:20:00.747 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.747 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:00.747 "hdgst": false, 00:20:00.747 "ddgst": false 00:20:00.747 }, 00:20:00.747 "method": "bdev_nvme_attach_controller" 00:20:00.747 },{ 00:20:00.747 "params": { 00:20:00.747 "name": "Nvme2", 00:20:00.747 "trtype": "tcp", 00:20:00.747 "traddr": "10.0.0.2", 00:20:00.747 "adrfam": "ipv4", 00:20:00.747 "trsvcid": "4420", 00:20:00.747 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:00.747 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:00.747 "hdgst": false, 00:20:00.747 "ddgst": false 00:20:00.747 }, 00:20:00.747 "method": "bdev_nvme_attach_controller" 00:20:00.747 },{ 00:20:00.747 "params": { 00:20:00.747 "name": "Nvme3", 00:20:00.747 "trtype": "tcp", 00:20:00.747 "traddr": "10.0.0.2", 00:20:00.747 "adrfam": "ipv4", 00:20:00.747 "trsvcid": "4420", 00:20:00.747 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:00.747 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:00.747 "hdgst": false, 00:20:00.747 "ddgst": false 00:20:00.747 }, 00:20:00.747 "method": "bdev_nvme_attach_controller" 00:20:00.747 },{ 00:20:00.747 "params": { 00:20:00.747 "name": "Nvme4", 00:20:00.747 "trtype": "tcp", 00:20:00.747 "traddr": "10.0.0.2", 00:20:00.747 "adrfam": "ipv4", 00:20:00.747 "trsvcid": "4420", 00:20:00.747 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:00.747 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:00.747 "hdgst": false, 00:20:00.747 "ddgst": false 00:20:00.747 }, 00:20:00.747 "method": "bdev_nvme_attach_controller" 00:20:00.747 },{ 00:20:00.747 "params": { 00:20:00.747 "name": "Nvme5", 00:20:00.747 "trtype": "tcp", 00:20:00.747 "traddr": "10.0.0.2", 00:20:00.747 "adrfam": "ipv4", 00:20:00.747 "trsvcid": "4420", 00:20:00.747 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:00.747 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:00.747 "hdgst": false, 00:20:00.747 "ddgst": false 00:20:00.747 }, 00:20:00.747 "method": "bdev_nvme_attach_controller" 00:20:00.747 },{ 00:20:00.747 "params": { 00:20:00.747 "name": "Nvme6", 00:20:00.747 "trtype": "tcp", 00:20:00.748 "traddr": "10.0.0.2", 00:20:00.748 "adrfam": "ipv4", 00:20:00.748 "trsvcid": "4420", 00:20:00.748 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:00.748 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:00.748 "hdgst": false, 00:20:00.748 "ddgst": false 00:20:00.748 }, 00:20:00.748 "method": "bdev_nvme_attach_controller" 00:20:00.748 },{ 00:20:00.748 "params": { 00:20:00.748 "name": "Nvme7", 00:20:00.748 "trtype": "tcp", 00:20:00.748 "traddr": "10.0.0.2", 00:20:00.748 "adrfam": "ipv4", 00:20:00.748 "trsvcid": "4420", 00:20:00.748 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:00.748 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:00.748 "hdgst": false, 00:20:00.748 "ddgst": false 00:20:00.748 }, 00:20:00.748 "method": "bdev_nvme_attach_controller" 00:20:00.748 },{ 00:20:00.748 "params": { 00:20:00.748 "name": "Nvme8", 00:20:00.748 "trtype": "tcp", 00:20:00.748 "traddr": "10.0.0.2", 00:20:00.748 "adrfam": "ipv4", 00:20:00.748 "trsvcid": "4420", 00:20:00.748 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:00.748 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:00.748 "hdgst": false, 
00:20:00.748 "ddgst": false 00:20:00.748 }, 00:20:00.748 "method": "bdev_nvme_attach_controller" 00:20:00.748 },{ 00:20:00.748 "params": { 00:20:00.748 "name": "Nvme9", 00:20:00.748 "trtype": "tcp", 00:20:00.748 "traddr": "10.0.0.2", 00:20:00.748 "adrfam": "ipv4", 00:20:00.748 "trsvcid": "4420", 00:20:00.748 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:00.748 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:00.748 "hdgst": false, 00:20:00.748 "ddgst": false 00:20:00.748 }, 00:20:00.748 "method": "bdev_nvme_attach_controller" 00:20:00.748 },{ 00:20:00.748 "params": { 00:20:00.748 "name": "Nvme10", 00:20:00.748 "trtype": "tcp", 00:20:00.748 "traddr": "10.0.0.2", 00:20:00.748 "adrfam": "ipv4", 00:20:00.748 "trsvcid": "4420", 00:20:00.748 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:00.748 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:00.748 "hdgst": false, 00:20:00.748 "ddgst": false 00:20:00.748 }, 00:20:00.748 "method": "bdev_nvme_attach_controller" 00:20:00.748 }' 00:20:00.748 [2024-07-15 23:23:16.045392] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:20:00.748 [2024-07-15 23:23:16.045465] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2381911 ] 00:20:01.005 EAL: No free 2048 kB hugepages reported on node 1 00:20:01.005 [2024-07-15 23:23:16.110090] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.005 [2024-07-15 23:23:16.221422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.903 Running I/O for 1 seconds... 00:20:03.837 00:20:03.837 Latency(us) 00:20:03.837 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.837 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:03.837 Verification LBA range: start 0x0 length 0x400 00:20:03.837 Nvme1n1 : 1.13 231.40 14.46 0.00 0.00 272473.06 7378.87 264085.81 00:20:03.837 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:03.837 Verification LBA range: start 0x0 length 0x400 00:20:03.837 Nvme2n1 : 1.14 227.91 14.24 0.00 0.00 272838.32 8495.41 246997.90 00:20:03.837 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:03.837 Verification LBA range: start 0x0 length 0x400 00:20:03.837 Nvme3n1 : 1.11 230.38 14.40 0.00 0.00 265445.64 17185.00 267192.70 00:20:03.837 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:03.837 Verification LBA range: start 0x0 length 0x400 00:20:03.837 Nvme4n1 : 1.12 232.00 14.50 0.00 0.00 258156.01 4927.34 262532.36 00:20:03.837 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:03.837 Verification LBA range: start 0x0 length 0x400 00:20:03.837 Nvme5n1 : 1.15 223.21 13.95 0.00 0.00 265423.83 19515.16 271853.04 00:20:03.837 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:03.837 Verification LBA range: start 0x0 length 0x400 00:20:03.837 Nvme6n1 : 1.19 215.64 13.48 0.00 0.00 270852.17 25049.32 306028.85 00:20:03.837 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:03.837 Verification LBA range: start 0x0 length 0x400 00:20:03.837 Nvme7n1 : 1.14 225.35 14.08 0.00 0.00 253613.13 19903.53 270299.59 00:20:03.837 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:03.837 Verification LBA range: start 0x0 
length 0x400 00:20:03.837 Nvme8n1 : 1.16 221.60 13.85 0.00 0.00 253788.16 20777.34 278066.82 00:20:03.837 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:03.837 Verification LBA range: start 0x0 length 0x400 00:20:03.837 Nvme9n1 : 1.16 220.86 13.80 0.00 0.00 250302.77 21068.61 267192.70 00:20:03.837 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:03.837 Verification LBA range: start 0x0 length 0x400 00:20:03.837 Nvme10n1 : 1.21 265.44 16.59 0.00 0.00 205927.01 7767.23 276513.37 00:20:03.837 =================================================================================================================== 00:20:03.837 Total : 2293.79 143.36 0.00 0.00 255701.03 4927.34 306028.85 00:20:04.095 23:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:20:04.095 23:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:04.095 23:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:04.095 23:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:04.095 23:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:04.095 23:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:04.095 23:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:20:04.095 23:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:04.095 23:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:20:04.095 23:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:04.095 23:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:04.095 rmmod nvme_tcp 00:20:04.095 rmmod nvme_fabrics 00:20:04.095 rmmod nvme_keyring 00:20:04.095 23:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:04.095 23:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:20:04.095 23:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:20:04.095 23:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2381305 ']' 00:20:04.095 23:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2381305 00:20:04.095 23:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 2381305 ']' 00:20:04.095 23:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 2381305 00:20:04.095 23:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:20:04.095 23:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:04.095 23:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2381305 00:20:04.095 23:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:04.095 23:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 
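A quick sanity check on the bdevperf table above: the run used -o 65536, so every I/O is 64 KiB and the MiB/s column should equal IOPS x 65536 / 2^20. Checking two rows with values copied from the table (bc assumed to be available on the build node):

# Nvme1n1: 231.40 IOPS at 64 KiB per I/O
echo "scale=2; 231.40 * 65536 / 1048576" | bc    # -> 14.46 MiB/s, matching the table
# Total row
echo "scale=2; 2293.79 * 65536 / 1048576" | bc   # -> 143.36 MiB/s, matching the table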
00:20:04.095 23:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2381305' 00:20:04.095 killing process with pid 2381305 00:20:04.095 23:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 2381305 00:20:04.095 23:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 2381305 00:20:04.663 23:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:04.663 23:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:04.663 23:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:04.663 23:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:04.663 23:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:04.663 23:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.663 23:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:04.663 23:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.565 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:06.565 00:20:06.565 real 0m12.482s 00:20:06.565 user 0m37.231s 00:20:06.565 sys 0m3.251s 00:20:06.565 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:06.565 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:06.565 ************************************ 00:20:06.565 END TEST nvmf_shutdown_tc1 00:20:06.565 ************************************ 00:20:06.565 23:23:21 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:20:06.565 23:23:21 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:06.565 23:23:21 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:06.565 23:23:21 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:06.565 23:23:21 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:06.825 ************************************ 00:20:06.825 START TEST nvmf_shutdown_tc2 00:20:06.825 ************************************ 00:20:06.825 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:20:06.825 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:20:06.825 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:06.825 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:06.825 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:06.825 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:06.825 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:06.825 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:06.825 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:20:06.825 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:06.825 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.825 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:06.825 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:06.825 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:06.825 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:06.825 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:06.825 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:06.825 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:06.825 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:06.825 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:06.826 23:23:21 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:06.826 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:06.826 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:06.826 Found net devices under 0000:84:00.0: cvl_0_0 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:06.826 Found net devices under 0000:84:00.1: cvl_0_1 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 
addr flush cvl_0_1 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:06.826 23:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:06.826 23:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:06.826 23:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:06.826 23:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:06.826 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:06.826 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:20:06.826 00:20:06.826 --- 10.0.0.2 ping statistics --- 00:20:06.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.826 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:20:06.826 23:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:06.826 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:06.826 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:20:06.826 00:20:06.826 --- 10.0.0.1 ping statistics --- 00:20:06.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.826 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:20:06.826 23:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:06.826 23:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:20:06.826 23:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:06.826 23:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:06.826 23:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:06.826 23:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:06.826 23:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:06.826 23:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:06.826 23:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:06.826 23:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:06.826 23:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:06.826 23:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:06.826 23:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:06.826 23:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@481 -- # nvmfpid=2382671 00:20:06.826 23:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:06.826 23:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2382671 00:20:06.826 23:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2382671 ']' 00:20:06.826 23:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.826 23:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:06.826 23:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:06.826 23:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:06.826 23:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:06.826 [2024-07-15 23:23:22.110104] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:20:06.826 [2024-07-15 23:23:22.110189] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:07.085 EAL: No free 2048 kB hugepages reported on node 1 00:20:07.085 [2024-07-15 23:23:22.179809] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:07.085 [2024-07-15 23:23:22.300415] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:07.085 [2024-07-15 23:23:22.300472] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:07.085 [2024-07-15 23:23:22.300488] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:07.085 [2024-07-15 23:23:22.300502] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:07.085 [2024-07-15 23:23:22.300514] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
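The app_setup_trace notices above appear because the target was started with -e 0xFFFF (all tracepoint groups enabled) and shared-memory instance id 0 (-i 0). Following the log's own suggestion, a snapshot of the tracepoints could be captured or preserved like this (sketch; assumes the spdk_trace tool from this build is on PATH):

# Dump a runtime snapshot of the nvmf target's tracepoints (app name -s, shm id -i).
spdk_trace -s nvmf -i 0
# Or keep the raw shared-memory file for offline analysis, as the notice suggests.
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0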
00:20:07.085 [2024-07-15 23:23:22.300616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:07.085 [2024-07-15 23:23:22.300718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:07.085 [2024-07-15 23:23:22.300796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.085 [2024-07-15 23:23:22.300792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:08.017 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:08.017 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:20:08.017 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:08.017 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:08.017 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:08.017 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:08.017 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:08.017 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.017 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:08.017 [2024-07-15 23:23:23.066686] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:08.017 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.017 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:08.017 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:08.017 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:08.017 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:08.017 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:08.017 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:08.017 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:08.017 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:08.017 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:08.017 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:08.018 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:08.018 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:08.018 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:08.018 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:08.018 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:08.018 23:23:23 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:08.018 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:08.018 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:08.018 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:08.018 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:08.018 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:08.018 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:08.018 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:08.018 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:08.018 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:08.018 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:08.018 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.018 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:08.018 Malloc1 00:20:08.018 [2024-07-15 23:23:23.151868] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:08.018 Malloc2 00:20:08.018 Malloc3 00:20:08.018 Malloc4 00:20:08.018 Malloc5 00:20:08.275 Malloc6 00:20:08.275 Malloc7 00:20:08.275 Malloc8 00:20:08.275 Malloc9 00:20:08.275 Malloc10 00:20:08.532 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.532 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:08.532 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:08.532 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:08.532 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2382984 00:20:08.532 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 2382984 /var/tmp/bdevperf.sock 00:20:08.532 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2382984 ']' 00:20:08.532 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:08.532 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:08.532 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:08.532 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:08.532 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:20:08.532 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
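The bdevperf job traced above receives the generated target JSON through process substitution; the --json /dev/fd/63 argument is the read end of that pipe. Reconstructed as a standalone command with the same workspace layout and the ten subsystems used in this run (sketch, to be run from an environment where gen_nvmf_target_json is sourced):

# Same invocation as traced: 64-deep queue, 64 KiB I/Os, verify workload, 10 s run.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10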
00:20:08.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:08.532 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:20:08.532 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:08.532 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:08.532 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:08.532 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:08.532 { 00:20:08.532 "params": { 00:20:08.532 "name": "Nvme$subsystem", 00:20:08.532 "trtype": "$TEST_TRANSPORT", 00:20:08.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.532 "adrfam": "ipv4", 00:20:08.532 "trsvcid": "$NVMF_PORT", 00:20:08.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.532 "hdgst": ${hdgst:-false}, 00:20:08.532 "ddgst": ${ddgst:-false} 00:20:08.532 }, 00:20:08.532 "method": "bdev_nvme_attach_controller" 00:20:08.532 } 00:20:08.532 EOF 00:20:08.532 )") 00:20:08.532 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:08.532 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:08.532 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:08.532 { 00:20:08.532 "params": { 00:20:08.532 "name": "Nvme$subsystem", 00:20:08.532 "trtype": "$TEST_TRANSPORT", 00:20:08.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.532 "adrfam": "ipv4", 00:20:08.532 "trsvcid": "$NVMF_PORT", 00:20:08.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.533 "hdgst": ${hdgst:-false}, 00:20:08.533 "ddgst": ${ddgst:-false} 00:20:08.533 }, 00:20:08.533 "method": "bdev_nvme_attach_controller" 00:20:08.533 } 00:20:08.533 EOF 00:20:08.533 )") 00:20:08.533 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:08.533 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:08.533 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:08.533 { 00:20:08.533 "params": { 00:20:08.533 "name": "Nvme$subsystem", 00:20:08.533 "trtype": "$TEST_TRANSPORT", 00:20:08.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.533 "adrfam": "ipv4", 00:20:08.533 "trsvcid": "$NVMF_PORT", 00:20:08.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.533 "hdgst": ${hdgst:-false}, 00:20:08.533 "ddgst": ${ddgst:-false} 00:20:08.533 }, 00:20:08.533 "method": "bdev_nvme_attach_controller" 00:20:08.533 } 00:20:08.533 EOF 00:20:08.533 )") 00:20:08.533 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:08.533 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:08.533 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:08.533 { 00:20:08.533 "params": { 00:20:08.533 "name": "Nvme$subsystem", 00:20:08.533 "trtype": "$TEST_TRANSPORT", 00:20:08.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.533 "adrfam": "ipv4", 00:20:08.533 "trsvcid": "$NVMF_PORT", 
00:20:08.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.533 "hdgst": ${hdgst:-false}, 00:20:08.533 "ddgst": ${ddgst:-false} 00:20:08.533 }, 00:20:08.533 "method": "bdev_nvme_attach_controller" 00:20:08.533 } 00:20:08.533 EOF 00:20:08.533 )") 00:20:08.533 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:08.533 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:08.533 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:08.533 { 00:20:08.533 "params": { 00:20:08.533 "name": "Nvme$subsystem", 00:20:08.533 "trtype": "$TEST_TRANSPORT", 00:20:08.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.533 "adrfam": "ipv4", 00:20:08.533 "trsvcid": "$NVMF_PORT", 00:20:08.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.533 "hdgst": ${hdgst:-false}, 00:20:08.533 "ddgst": ${ddgst:-false} 00:20:08.533 }, 00:20:08.533 "method": "bdev_nvme_attach_controller" 00:20:08.533 } 00:20:08.533 EOF 00:20:08.533 )") 00:20:08.533 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:08.533 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:08.533 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:08.533 { 00:20:08.533 "params": { 00:20:08.533 "name": "Nvme$subsystem", 00:20:08.533 "trtype": "$TEST_TRANSPORT", 00:20:08.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.533 "adrfam": "ipv4", 00:20:08.533 "trsvcid": "$NVMF_PORT", 00:20:08.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.533 "hdgst": ${hdgst:-false}, 00:20:08.533 "ddgst": ${ddgst:-false} 00:20:08.533 }, 00:20:08.533 "method": "bdev_nvme_attach_controller" 00:20:08.533 } 00:20:08.533 EOF 00:20:08.533 )") 00:20:08.533 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:08.533 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:08.533 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:08.533 { 00:20:08.533 "params": { 00:20:08.533 "name": "Nvme$subsystem", 00:20:08.533 "trtype": "$TEST_TRANSPORT", 00:20:08.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.533 "adrfam": "ipv4", 00:20:08.533 "trsvcid": "$NVMF_PORT", 00:20:08.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.533 "hdgst": ${hdgst:-false}, 00:20:08.533 "ddgst": ${ddgst:-false} 00:20:08.533 }, 00:20:08.533 "method": "bdev_nvme_attach_controller" 00:20:08.533 } 00:20:08.533 EOF 00:20:08.533 )") 00:20:08.533 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:08.533 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:08.533 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:08.533 { 00:20:08.533 "params": { 00:20:08.533 "name": "Nvme$subsystem", 00:20:08.533 "trtype": "$TEST_TRANSPORT", 00:20:08.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.533 "adrfam": "ipv4", 00:20:08.533 "trsvcid": "$NVMF_PORT", 00:20:08.533 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.533 "hdgst": ${hdgst:-false}, 00:20:08.533 "ddgst": ${ddgst:-false} 00:20:08.533 }, 00:20:08.533 "method": "bdev_nvme_attach_controller" 00:20:08.533 } 00:20:08.533 EOF 00:20:08.533 )") 00:20:08.533 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:08.533 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:08.533 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:08.533 { 00:20:08.533 "params": { 00:20:08.533 "name": "Nvme$subsystem", 00:20:08.533 "trtype": "$TEST_TRANSPORT", 00:20:08.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.533 "adrfam": "ipv4", 00:20:08.533 "trsvcid": "$NVMF_PORT", 00:20:08.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.533 "hdgst": ${hdgst:-false}, 00:20:08.533 "ddgst": ${ddgst:-false} 00:20:08.533 }, 00:20:08.533 "method": "bdev_nvme_attach_controller" 00:20:08.533 } 00:20:08.533 EOF 00:20:08.533 )") 00:20:08.533 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:08.533 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:08.533 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:08.533 { 00:20:08.533 "params": { 00:20:08.533 "name": "Nvme$subsystem", 00:20:08.533 "trtype": "$TEST_TRANSPORT", 00:20:08.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.533 "adrfam": "ipv4", 00:20:08.533 "trsvcid": "$NVMF_PORT", 00:20:08.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.533 "hdgst": ${hdgst:-false}, 00:20:08.533 "ddgst": ${ddgst:-false} 00:20:08.533 }, 00:20:08.533 "method": "bdev_nvme_attach_controller" 00:20:08.533 } 00:20:08.533 EOF 00:20:08.533 )") 00:20:08.533 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:08.533 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:20:08.533 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:20:08.533 23:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:08.533 "params": { 00:20:08.533 "name": "Nvme1", 00:20:08.533 "trtype": "tcp", 00:20:08.533 "traddr": "10.0.0.2", 00:20:08.533 "adrfam": "ipv4", 00:20:08.533 "trsvcid": "4420", 00:20:08.533 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:08.533 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:08.533 "hdgst": false, 00:20:08.533 "ddgst": false 00:20:08.533 }, 00:20:08.533 "method": "bdev_nvme_attach_controller" 00:20:08.533 },{ 00:20:08.533 "params": { 00:20:08.533 "name": "Nvme2", 00:20:08.533 "trtype": "tcp", 00:20:08.533 "traddr": "10.0.0.2", 00:20:08.533 "adrfam": "ipv4", 00:20:08.533 "trsvcid": "4420", 00:20:08.533 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:08.533 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:08.533 "hdgst": false, 00:20:08.533 "ddgst": false 00:20:08.533 }, 00:20:08.533 "method": "bdev_nvme_attach_controller" 00:20:08.533 },{ 00:20:08.533 "params": { 00:20:08.533 "name": "Nvme3", 00:20:08.533 "trtype": "tcp", 00:20:08.533 "traddr": "10.0.0.2", 00:20:08.533 "adrfam": "ipv4", 00:20:08.533 "trsvcid": "4420", 00:20:08.533 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:08.533 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:08.533 "hdgst": false, 00:20:08.533 "ddgst": false 00:20:08.533 }, 00:20:08.533 "method": "bdev_nvme_attach_controller" 00:20:08.533 },{ 00:20:08.533 "params": { 00:20:08.533 "name": "Nvme4", 00:20:08.533 "trtype": "tcp", 00:20:08.533 "traddr": "10.0.0.2", 00:20:08.533 "adrfam": "ipv4", 00:20:08.533 "trsvcid": "4420", 00:20:08.533 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:08.533 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:08.533 "hdgst": false, 00:20:08.533 "ddgst": false 00:20:08.533 }, 00:20:08.533 "method": "bdev_nvme_attach_controller" 00:20:08.533 },{ 00:20:08.533 "params": { 00:20:08.533 "name": "Nvme5", 00:20:08.533 "trtype": "tcp", 00:20:08.533 "traddr": "10.0.0.2", 00:20:08.533 "adrfam": "ipv4", 00:20:08.533 "trsvcid": "4420", 00:20:08.533 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:08.533 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:08.533 "hdgst": false, 00:20:08.533 "ddgst": false 00:20:08.533 }, 00:20:08.533 "method": "bdev_nvme_attach_controller" 00:20:08.533 },{ 00:20:08.533 "params": { 00:20:08.533 "name": "Nvme6", 00:20:08.533 "trtype": "tcp", 00:20:08.533 "traddr": "10.0.0.2", 00:20:08.533 "adrfam": "ipv4", 00:20:08.533 "trsvcid": "4420", 00:20:08.533 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:08.533 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:08.533 "hdgst": false, 00:20:08.533 "ddgst": false 00:20:08.533 }, 00:20:08.533 "method": "bdev_nvme_attach_controller" 00:20:08.533 },{ 00:20:08.533 "params": { 00:20:08.533 "name": "Nvme7", 00:20:08.533 "trtype": "tcp", 00:20:08.533 "traddr": "10.0.0.2", 00:20:08.534 "adrfam": "ipv4", 00:20:08.534 "trsvcid": "4420", 00:20:08.534 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:08.534 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:08.534 "hdgst": false, 00:20:08.534 "ddgst": false 00:20:08.534 }, 00:20:08.534 "method": "bdev_nvme_attach_controller" 00:20:08.534 },{ 00:20:08.534 "params": { 00:20:08.534 "name": "Nvme8", 00:20:08.534 "trtype": "tcp", 00:20:08.534 "traddr": "10.0.0.2", 00:20:08.534 "adrfam": "ipv4", 00:20:08.534 "trsvcid": "4420", 00:20:08.534 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:08.534 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:08.534 "hdgst": false, 
00:20:08.534 "ddgst": false 00:20:08.534 }, 00:20:08.534 "method": "bdev_nvme_attach_controller" 00:20:08.534 },{ 00:20:08.534 "params": { 00:20:08.534 "name": "Nvme9", 00:20:08.534 "trtype": "tcp", 00:20:08.534 "traddr": "10.0.0.2", 00:20:08.534 "adrfam": "ipv4", 00:20:08.534 "trsvcid": "4420", 00:20:08.534 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:08.534 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:08.534 "hdgst": false, 00:20:08.534 "ddgst": false 00:20:08.534 }, 00:20:08.534 "method": "bdev_nvme_attach_controller" 00:20:08.534 },{ 00:20:08.534 "params": { 00:20:08.534 "name": "Nvme10", 00:20:08.534 "trtype": "tcp", 00:20:08.534 "traddr": "10.0.0.2", 00:20:08.534 "adrfam": "ipv4", 00:20:08.534 "trsvcid": "4420", 00:20:08.534 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:08.534 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:08.534 "hdgst": false, 00:20:08.534 "ddgst": false 00:20:08.534 }, 00:20:08.534 "method": "bdev_nvme_attach_controller" 00:20:08.534 }' 00:20:08.534 [2024-07-15 23:23:23.656706] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:20:08.534 [2024-07-15 23:23:23.656827] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2382984 ] 00:20:08.534 EAL: No free 2048 kB hugepages reported on node 1 00:20:08.534 [2024-07-15 23:23:23.719973] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.534 [2024-07-15 23:23:23.830361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.904 Running I/O for 10 seconds... 00:20:09.904 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:09.904 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:20:09.904 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:09.904 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.904 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:10.161 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.161 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:10.161 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:10.161 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:10.162 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:20:10.162 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:20:10.162 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:10.162 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:10.162 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:10.162 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:10.162 23:23:25 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.162 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:10.162 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.162 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:20:10.162 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:20:10.162 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:10.420 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:10.420 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:10.420 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:10.420 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:10.420 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.420 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:10.420 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.420 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:20:10.420 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:10.420 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:10.678 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:10.678 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:10.678 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:10.678 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:10.678 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.678 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:10.678 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.678 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:20:10.678 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:20:10.678 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:20:10.678 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:20:10.678 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:20:10.678 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2382984 00:20:10.678 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2382984 ']' 00:20:10.678 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2382984 00:20:10.678 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@953 -- # uname 00:20:10.678 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:10.678 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2382984 00:20:10.678 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:10.678 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:10.678 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2382984' 00:20:10.678 killing process with pid 2382984 00:20:10.678 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2382984 00:20:10.678 23:23:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2382984 00:20:10.937 Received shutdown signal, test time was about 0.929225 seconds 00:20:10.937 00:20:10.937 Latency(us) 00:20:10.937 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.937 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:10.937 Verification LBA range: start 0x0 length 0x400 00:20:10.937 Nvme1n1 : 0.91 211.89 13.24 0.00 0.00 298499.29 21359.88 267192.70 00:20:10.937 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:10.937 Verification LBA range: start 0x0 length 0x400 00:20:10.937 Nvme2n1 : 0.89 219.72 13.73 0.00 0.00 280464.48 4053.52 256318.58 00:20:10.937 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:10.937 Verification LBA range: start 0x0 length 0x400 00:20:10.937 Nvme3n1 : 0.93 275.74 17.23 0.00 0.00 220168.72 16505.36 265639.25 00:20:10.937 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:10.937 Verification LBA range: start 0x0 length 0x400 00:20:10.937 Nvme4n1 : 0.92 276.77 17.30 0.00 0.00 214753.85 20777.34 250104.79 00:20:10.937 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:10.937 Verification LBA range: start 0x0 length 0x400 00:20:10.937 Nvme5n1 : 0.90 228.69 14.29 0.00 0.00 248501.38 15728.64 254765.13 00:20:10.937 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:10.937 Verification LBA range: start 0x0 length 0x400 00:20:10.937 Nvme6n1 : 0.91 210.77 13.17 0.00 0.00 269981.01 20291.89 267192.70 00:20:10.937 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:10.937 Verification LBA range: start 0x0 length 0x400 00:20:10.937 Nvme7n1 : 0.90 218.86 13.68 0.00 0.00 252301.46 5339.97 257872.02 00:20:10.937 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:10.937 Verification LBA range: start 0x0 length 0x400 00:20:10.937 Nvme8n1 : 0.88 217.07 13.57 0.00 0.00 249198.36 23107.51 259425.47 00:20:10.937 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:10.937 Verification LBA range: start 0x0 length 0x400 00:20:10.937 Nvme9n1 : 0.92 213.11 13.32 0.00 0.00 248721.63 4369.07 271853.04 00:20:10.937 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:10.937 Verification LBA range: start 0x0 length 0x400 00:20:10.937 Nvme10n1 : 0.92 209.01 13.06 0.00 0.00 249216.82 20097.71 290494.39 00:20:10.937 
=================================================================================================================== 00:20:10.937 Total : 2281.62 142.60 0.00 0.00 250988.80 4053.52 290494.39 00:20:11.195 23:23:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:20:12.129 23:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2382671 00:20:12.129 23:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:20:12.129 23:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:12.129 23:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:12.129 23:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:12.129 23:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:12.129 23:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:12.129 23:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:20:12.129 23:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:12.129 23:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:20:12.129 23:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:12.129 23:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:12.129 rmmod nvme_tcp 00:20:12.129 rmmod nvme_fabrics 00:20:12.129 rmmod nvme_keyring 00:20:12.129 23:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:12.129 23:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:20:12.129 23:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:20:12.129 23:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2382671 ']' 00:20:12.129 23:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2382671 00:20:12.129 23:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2382671 ']' 00:20:12.129 23:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2382671 00:20:12.129 23:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:20:12.129 23:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:12.129 23:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2382671 00:20:12.129 23:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:12.129 23:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:12.129 23:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2382671' 00:20:12.129 killing process with pid 2382671 00:20:12.129 23:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2382671 00:20:12.129 23:23:27 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2382671 00:20:12.695 23:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:12.695 23:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:12.695 23:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:12.695 23:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:12.695 23:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:12.695 23:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.695 23:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:12.695 23:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.229 23:23:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:15.229 00:20:15.229 real 0m8.093s 00:20:15.229 user 0m24.818s 00:20:15.229 sys 0m1.483s 00:20:15.229 23:23:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:15.229 23:23:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:15.229 ************************************ 00:20:15.229 END TEST nvmf_shutdown_tc2 00:20:15.229 ************************************ 00:20:15.229 23:23:30 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:20:15.229 23:23:30 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:15.229 23:23:30 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:15.229 23:23:30 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:15.229 23:23:30 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:15.229 ************************************ 00:20:15.229 START TEST nvmf_shutdown_tc3 00:20:15.229 ************************************ 00:20:15.229 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:20:15.229 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:20:15.229 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:15.229 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:15.229 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:15.229 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:15.229 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:15.229 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:15.229 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.229 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:15.229 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.229 23:23:30 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:15.229 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:15.229 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:15.229 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:15.229 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:15.229 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:15.229 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:15.229 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:15.229 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:15.229 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:15.229 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:15.229 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:20:15.229 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:15.229 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:20:15.229 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:20:15.229 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:20:15.229 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:20:15.229 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:20:15.229 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:15.229 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:15.229 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:15.229 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:15.229 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:15.229 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:15.230 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:15.230 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:15.230 Found net devices under 0000:84:00.0: cvl_0_0 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:15.230 Found net devices under 0000:84:00.1: cvl_0_1 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:15.230 23:23:30 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:15.230 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:15.230 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:20:15.230 00:20:15.230 --- 10.0.0.2 ping statistics --- 00:20:15.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:15.230 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:15.230 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:15.230 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:20:15.230 00:20:15.230 --- 10.0.0.1 ping statistics --- 00:20:15.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:15.230 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2383775 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2383775 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec 
cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2383775 ']' 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:15.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:15.230 23:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:15.230 [2024-07-15 23:23:30.248904] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:20:15.230 [2024-07-15 23:23:30.248984] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:15.230 EAL: No free 2048 kB hugepages reported on node 1 00:20:15.230 [2024-07-15 23:23:30.317707] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:15.230 [2024-07-15 23:23:30.429830] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:15.230 [2024-07-15 23:23:30.429906] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:15.230 [2024-07-15 23:23:30.429936] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:15.230 [2024-07-15 23:23:30.429948] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:15.230 [2024-07-15 23:23:30.429958] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
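For shutdown_tc3 the target itself is relaunched inside the cvl_0_0_ns_spdk namespace with core mask 0x1E, i.e. cores 1-4, which leaves core 0 free for the bdevperf initiator started later. A rough sketch of that start-and-wait step, assuming an SPDK build tree in the current directory; the real logic lives in nvmfappstart() and waitforlisten() in the test common scripts, and rpc.py here stands in for the rpc_cmd wrapper.

# Hypothetical condensation of the nvmfappstart -m 0x1E step traced above.
NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)

"${NVMF_TARGET_NS_CMD[@]}" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!

# The RPC socket is a filesystem object, so it stays reachable from outside the
# network namespace; poll it until the target answers before configuring it.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
	kill -0 "$nvmfpid" || exit 1   # bail out if the target died during startup
	sleep 0.1
done

# The transport is created in the next entries; shown here only for context.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192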
00:20:15.231 [2024-07-15 23:23:30.430050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:15.231 [2024-07-15 23:23:30.430112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:15.231 [2024-07-15 23:23:30.430145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:15.231 [2024-07-15 23:23:30.430147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:16.163 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:16.163 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:20:16.163 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:16.163 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:16.163 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:16.163 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:16.163 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:16.163 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.163 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:16.163 [2024-07-15 23:23:31.277992] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:16.163 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.163 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:16.163 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:16.163 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:16.163 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:16.163 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:16.163 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:16.163 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:16.163 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:16.163 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:16.163 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:16.163 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:16.163 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:16.163 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:16.163 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:16.163 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:16.163 23:23:31 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:16.163 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:16.163 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:16.163 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:16.163 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:16.163 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:16.163 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:16.163 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:16.163 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:16.163 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:16.163 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:16.163 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.163 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:16.163 Malloc1 00:20:16.163 [2024-07-15 23:23:31.353415] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:16.163 Malloc2 00:20:16.163 Malloc3 00:20:16.163 Malloc4 00:20:16.420 Malloc5 00:20:16.420 Malloc6 00:20:16.420 Malloc7 00:20:16.420 Malloc8 00:20:16.420 Malloc9 00:20:16.708 Malloc10 00:20:16.708 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.708 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:16.708 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:16.708 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:16.708 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2384077 00:20:16.708 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2384077 /var/tmp/bdevperf.sock 00:20:16.708 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2384077 ']' 00:20:16.708 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:16.708 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:16.708 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:16.708 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:16.708 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:16.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
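The bdevperf initiator for tc3 is launched with its controller list fed through a process substitution (the --json /dev/fd/63 in the command line above) and with its own RPC socket. A hedged sketch of that launch, reusing the gen_target_json_sketch stand-in from earlier; the flags mirror the trace (-q 64 -o 65536 -w verify -t 10) and the polling loop only approximates waitforlisten().

sock=/var/tmp/bdevperf.sock

./build/examples/bdevperf -r "$sock" \
	--json <(gen_target_json_sketch 1 2 3 4 5 6 7 8 9 10) \
	-q 64 -o 65536 -w verify -t 10 &
perfpid=$!

# Poll until the bdevperf RPC socket answers, then block in framework_wait_init
# until the JSON config (the ten attach_controller calls) has been applied.
until ./scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null; do
	kill -0 "$perfpid" || exit 1
	sleep 0.1
done
./scripts/rpc.py -s "$sock" framework_wait_init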
00:20:16.708 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:20:16.708 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:16.708 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:20:16.708 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:16.708 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:16.708 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:16.708 { 00:20:16.708 "params": { 00:20:16.708 "name": "Nvme$subsystem", 00:20:16.708 "trtype": "$TEST_TRANSPORT", 00:20:16.708 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:16.708 "adrfam": "ipv4", 00:20:16.708 "trsvcid": "$NVMF_PORT", 00:20:16.708 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:16.708 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:16.708 "hdgst": ${hdgst:-false}, 00:20:16.708 "ddgst": ${ddgst:-false} 00:20:16.708 }, 00:20:16.708 "method": "bdev_nvme_attach_controller" 00:20:16.708 } 00:20:16.708 EOF 00:20:16.708 )") 00:20:16.708 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:16.708 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:16.708 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:16.708 { 00:20:16.708 "params": { 00:20:16.708 "name": "Nvme$subsystem", 00:20:16.709 "trtype": "$TEST_TRANSPORT", 00:20:16.709 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:16.709 "adrfam": "ipv4", 00:20:16.709 "trsvcid": "$NVMF_PORT", 00:20:16.709 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:16.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:16.709 "hdgst": ${hdgst:-false}, 00:20:16.709 "ddgst": ${ddgst:-false} 00:20:16.709 }, 00:20:16.709 "method": "bdev_nvme_attach_controller" 00:20:16.709 } 00:20:16.709 EOF 00:20:16.709 )") 00:20:16.709 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:16.709 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:16.709 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:16.709 { 00:20:16.709 "params": { 00:20:16.709 "name": "Nvme$subsystem", 00:20:16.709 "trtype": "$TEST_TRANSPORT", 00:20:16.709 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:16.709 "adrfam": "ipv4", 00:20:16.709 "trsvcid": "$NVMF_PORT", 00:20:16.709 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:16.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:16.709 "hdgst": ${hdgst:-false}, 00:20:16.709 "ddgst": ${ddgst:-false} 00:20:16.709 }, 00:20:16.709 "method": "bdev_nvme_attach_controller" 00:20:16.709 } 00:20:16.709 EOF 00:20:16.709 )") 00:20:16.709 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:16.709 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:16.709 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:16.709 { 00:20:16.709 "params": { 00:20:16.709 "name": "Nvme$subsystem", 00:20:16.709 "trtype": "$TEST_TRANSPORT", 00:20:16.709 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:16.709 "adrfam": "ipv4", 00:20:16.709 "trsvcid": "$NVMF_PORT", 
00:20:16.709 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:16.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:16.709 "hdgst": ${hdgst:-false}, 00:20:16.709 "ddgst": ${ddgst:-false} 00:20:16.709 }, 00:20:16.709 "method": "bdev_nvme_attach_controller" 00:20:16.709 } 00:20:16.709 EOF 00:20:16.709 )") 00:20:16.709 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:16.709 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:16.709 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:16.709 { 00:20:16.709 "params": { 00:20:16.709 "name": "Nvme$subsystem", 00:20:16.709 "trtype": "$TEST_TRANSPORT", 00:20:16.709 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:16.709 "adrfam": "ipv4", 00:20:16.709 "trsvcid": "$NVMF_PORT", 00:20:16.709 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:16.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:16.709 "hdgst": ${hdgst:-false}, 00:20:16.709 "ddgst": ${ddgst:-false} 00:20:16.709 }, 00:20:16.709 "method": "bdev_nvme_attach_controller" 00:20:16.709 } 00:20:16.709 EOF 00:20:16.709 )") 00:20:16.709 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:16.709 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:16.709 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:16.709 { 00:20:16.709 "params": { 00:20:16.709 "name": "Nvme$subsystem", 00:20:16.709 "trtype": "$TEST_TRANSPORT", 00:20:16.709 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:16.709 "adrfam": "ipv4", 00:20:16.709 "trsvcid": "$NVMF_PORT", 00:20:16.709 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:16.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:16.709 "hdgst": ${hdgst:-false}, 00:20:16.709 "ddgst": ${ddgst:-false} 00:20:16.709 }, 00:20:16.709 "method": "bdev_nvme_attach_controller" 00:20:16.709 } 00:20:16.709 EOF 00:20:16.709 )") 00:20:16.709 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:16.709 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:16.709 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:16.709 { 00:20:16.709 "params": { 00:20:16.709 "name": "Nvme$subsystem", 00:20:16.709 "trtype": "$TEST_TRANSPORT", 00:20:16.709 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:16.709 "adrfam": "ipv4", 00:20:16.709 "trsvcid": "$NVMF_PORT", 00:20:16.709 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:16.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:16.709 "hdgst": ${hdgst:-false}, 00:20:16.709 "ddgst": ${ddgst:-false} 00:20:16.709 }, 00:20:16.709 "method": "bdev_nvme_attach_controller" 00:20:16.709 } 00:20:16.709 EOF 00:20:16.709 )") 00:20:16.709 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:16.709 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:16.709 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:16.709 { 00:20:16.709 "params": { 00:20:16.709 "name": "Nvme$subsystem", 00:20:16.709 "trtype": "$TEST_TRANSPORT", 00:20:16.709 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:16.709 "adrfam": "ipv4", 00:20:16.709 "trsvcid": "$NVMF_PORT", 00:20:16.709 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:16.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:16.709 "hdgst": ${hdgst:-false}, 00:20:16.709 "ddgst": ${ddgst:-false} 00:20:16.709 }, 00:20:16.709 "method": "bdev_nvme_attach_controller" 00:20:16.709 } 00:20:16.709 EOF 00:20:16.709 )") 00:20:16.709 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:16.709 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:16.709 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:16.709 { 00:20:16.709 "params": { 00:20:16.709 "name": "Nvme$subsystem", 00:20:16.709 "trtype": "$TEST_TRANSPORT", 00:20:16.709 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:16.709 "adrfam": "ipv4", 00:20:16.709 "trsvcid": "$NVMF_PORT", 00:20:16.709 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:16.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:16.709 "hdgst": ${hdgst:-false}, 00:20:16.709 "ddgst": ${ddgst:-false} 00:20:16.709 }, 00:20:16.709 "method": "bdev_nvme_attach_controller" 00:20:16.709 } 00:20:16.709 EOF 00:20:16.709 )") 00:20:16.709 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:16.709 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:16.709 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:16.709 { 00:20:16.709 "params": { 00:20:16.709 "name": "Nvme$subsystem", 00:20:16.709 "trtype": "$TEST_TRANSPORT", 00:20:16.709 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:16.709 "adrfam": "ipv4", 00:20:16.709 "trsvcid": "$NVMF_PORT", 00:20:16.709 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:16.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:16.709 "hdgst": ${hdgst:-false}, 00:20:16.709 "ddgst": ${ddgst:-false} 00:20:16.709 }, 00:20:16.709 "method": "bdev_nvme_attach_controller" 00:20:16.709 } 00:20:16.709 EOF 00:20:16.709 )") 00:20:16.709 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:16.709 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:20:16.709 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:20:16.709 23:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:16.709 "params": { 00:20:16.709 "name": "Nvme1", 00:20:16.709 "trtype": "tcp", 00:20:16.709 "traddr": "10.0.0.2", 00:20:16.709 "adrfam": "ipv4", 00:20:16.709 "trsvcid": "4420", 00:20:16.709 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.709 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:16.709 "hdgst": false, 00:20:16.709 "ddgst": false 00:20:16.709 }, 00:20:16.709 "method": "bdev_nvme_attach_controller" 00:20:16.709 },{ 00:20:16.709 "params": { 00:20:16.709 "name": "Nvme2", 00:20:16.709 "trtype": "tcp", 00:20:16.709 "traddr": "10.0.0.2", 00:20:16.709 "adrfam": "ipv4", 00:20:16.709 "trsvcid": "4420", 00:20:16.709 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:16.709 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:16.709 "hdgst": false, 00:20:16.709 "ddgst": false 00:20:16.709 }, 00:20:16.709 "method": "bdev_nvme_attach_controller" 00:20:16.709 },{ 00:20:16.709 "params": { 00:20:16.709 "name": "Nvme3", 00:20:16.709 "trtype": "tcp", 00:20:16.709 "traddr": "10.0.0.2", 00:20:16.709 "adrfam": "ipv4", 00:20:16.709 "trsvcid": "4420", 00:20:16.709 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:16.709 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:16.709 "hdgst": false, 00:20:16.709 "ddgst": false 00:20:16.709 }, 00:20:16.709 "method": "bdev_nvme_attach_controller" 00:20:16.709 },{ 00:20:16.709 "params": { 00:20:16.709 "name": "Nvme4", 00:20:16.709 "trtype": "tcp", 00:20:16.709 "traddr": "10.0.0.2", 00:20:16.709 "adrfam": "ipv4", 00:20:16.709 "trsvcid": "4420", 00:20:16.709 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:16.709 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:16.709 "hdgst": false, 00:20:16.709 "ddgst": false 00:20:16.709 }, 00:20:16.709 "method": "bdev_nvme_attach_controller" 00:20:16.709 },{ 00:20:16.709 "params": { 00:20:16.709 "name": "Nvme5", 00:20:16.709 "trtype": "tcp", 00:20:16.709 "traddr": "10.0.0.2", 00:20:16.709 "adrfam": "ipv4", 00:20:16.709 "trsvcid": "4420", 00:20:16.709 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:16.709 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:16.709 "hdgst": false, 00:20:16.709 "ddgst": false 00:20:16.709 }, 00:20:16.709 "method": "bdev_nvme_attach_controller" 00:20:16.709 },{ 00:20:16.709 "params": { 00:20:16.709 "name": "Nvme6", 00:20:16.710 "trtype": "tcp", 00:20:16.710 "traddr": "10.0.0.2", 00:20:16.710 "adrfam": "ipv4", 00:20:16.710 "trsvcid": "4420", 00:20:16.710 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:16.710 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:16.710 "hdgst": false, 00:20:16.710 "ddgst": false 00:20:16.710 }, 00:20:16.710 "method": "bdev_nvme_attach_controller" 00:20:16.710 },{ 00:20:16.710 "params": { 00:20:16.710 "name": "Nvme7", 00:20:16.710 "trtype": "tcp", 00:20:16.710 "traddr": "10.0.0.2", 00:20:16.710 "adrfam": "ipv4", 00:20:16.710 "trsvcid": "4420", 00:20:16.710 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:16.710 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:16.710 "hdgst": false, 00:20:16.710 "ddgst": false 00:20:16.710 }, 00:20:16.710 "method": "bdev_nvme_attach_controller" 00:20:16.710 },{ 00:20:16.710 "params": { 00:20:16.710 "name": "Nvme8", 00:20:16.710 "trtype": "tcp", 00:20:16.710 "traddr": "10.0.0.2", 00:20:16.710 "adrfam": "ipv4", 00:20:16.710 "trsvcid": "4420", 00:20:16.710 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:16.710 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:16.710 "hdgst": false, 
00:20:16.710 "ddgst": false 00:20:16.710 }, 00:20:16.710 "method": "bdev_nvme_attach_controller" 00:20:16.710 },{ 00:20:16.710 "params": { 00:20:16.710 "name": "Nvme9", 00:20:16.710 "trtype": "tcp", 00:20:16.710 "traddr": "10.0.0.2", 00:20:16.710 "adrfam": "ipv4", 00:20:16.710 "trsvcid": "4420", 00:20:16.710 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:16.710 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:16.710 "hdgst": false, 00:20:16.710 "ddgst": false 00:20:16.710 }, 00:20:16.710 "method": "bdev_nvme_attach_controller" 00:20:16.710 },{ 00:20:16.710 "params": { 00:20:16.710 "name": "Nvme10", 00:20:16.710 "trtype": "tcp", 00:20:16.710 "traddr": "10.0.0.2", 00:20:16.710 "adrfam": "ipv4", 00:20:16.710 "trsvcid": "4420", 00:20:16.710 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:16.710 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:16.710 "hdgst": false, 00:20:16.710 "ddgst": false 00:20:16.710 }, 00:20:16.710 "method": "bdev_nvme_attach_controller" 00:20:16.710 }' 00:20:16.710 [2024-07-15 23:23:31.844820] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:20:16.710 [2024-07-15 23:23:31.844898] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2384077 ] 00:20:16.710 EAL: No free 2048 kB hugepages reported on node 1 00:20:16.710 [2024-07-15 23:23:31.909080] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.993 [2024-07-15 23:23:32.020105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:18.889 Running I/O for 10 seconds... 00:20:19.456 23:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:19.456 23:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:20:19.456 23:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:19.456 23:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.456 23:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:19.456 23:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.456 23:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:19.456 23:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:19.456 23:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:19.456 23:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:19.456 23:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:20:19.456 23:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:20:19.456 23:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:19.456 23:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:19.456 23:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme1n1 00:20:19.456 23:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.456 23:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:19.456 23:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:19.456 23:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.456 23:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:20:19.456 23:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:19.456 23:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:19.725 23:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:19.725 23:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:19.725 23:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:19.725 23:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:19.725 23:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.726 23:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:19.726 23:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.726 23:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:20:19.726 23:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:20:19.726 23:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:20:19.726 23:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:20:19.726 23:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:20:19.726 23:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2383775 00:20:19.726 23:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 2383775 ']' 00:20:19.726 23:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 2383775 00:20:19.726 23:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:20:19.726 23:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:19.726 23:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2383775 00:20:19.726 23:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:19.726 23:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:19.726 23:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2383775' 00:20:19.726 killing process with pid 2383775 00:20:19.726 23:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 2383775 00:20:19.726 23:23:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 2383775 00:20:19.726 
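The nvmf_shutdown_tc3 sequence above first blocks on framework_wait_init over the bdevperf RPC socket, then polls bdev_get_iostat for Nvme1n1 (67 reads on the first probe, 131 a quarter of a second later) until at least 100 reads have completed, and only then kills process 2383775 (the nvmf target, judging by the reactor_1 process name), so the shutdown happens while I/O is in flight. A condensed sketch of that wait loop, assuming SPDK's scripts/rpc.py is on PATH (rpc_cmd in the trace is the test suite's wrapper around the same RPC client):

#!/usr/bin/env bash
# Wait for I/O to start flowing on a bdev exposed by a running bdevperf instance.
# Socket path, bdev name and the 100-read threshold match the trace above.
sock=/var/tmp/bdevperf.sock
bdev=Nvme1n1

# Block until the bdevperf application has finished initializing.
rpc.py -s "$sock" framework_wait_init

ret=1
for ((i = 10; i != 0; i--)); do
  reads=$(rpc.py -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
  if [ "$reads" -ge 100 ]; then
    ret=0
    break
  fi
  sleep 0.25
done
exit $ret

If the threshold is never reached within the ten probes the script exits non-zero for the caller to handle; here the second probe succeeds, the target is killed, and the burst of tcp.c recv-state errors that follows is the target repeatedly logging the receive state of its queue pairs as the connections are torn down.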
[2024-07-15 23:23:34.971837] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ec2c0 is same with the state(5) to be set 
00:20:19.726 [2024-07-15 23:23:34.974207] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eed70 is same with the state(5) to be set 
00:20:19.727 [2024-07-15 23:23:34.976454] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ec7a0 is same with the state(5) to be set 
00:20:19.728 [2024-07-15 23:23:34.980167] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed660 is same with the state(5) to be set 
00:20:19.728 [2024-07-15 23:23:34.982099] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20edb40 is same with the state(5) to be set 
00:20:19.729 [2024-07-15 23:23:34.983063] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee020 is same with the state(5) to be set 
00:20:19.729 [2024-07-15 23:23:34.985735] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 
00:20:19.729 [2024-07-15
23:23:34.985890] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.729 [2024-07-15 23:23:34.985902] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.729 [2024-07-15 23:23:34.985914] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.729 [2024-07-15 23:23:34.985926] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.729 [2024-07-15 23:23:34.985938] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.729 [2024-07-15 23:23:34.985950] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.729 [2024-07-15 23:23:34.985963] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.729 [2024-07-15 23:23:34.985976] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.729 [2024-07-15 23:23:34.985988] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.729 [2024-07-15 23:23:34.986000] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.729 [2024-07-15 23:23:34.986012] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.729 [2024-07-15 23:23:34.986025] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.729 [2024-07-15 23:23:34.986040] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.729 [2024-07-15 23:23:34.986067] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.729 [2024-07-15 23:23:34.986080] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.729 [2024-07-15 23:23:34.986092] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.729 [2024-07-15 23:23:34.986114] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.729 [2024-07-15 23:23:34.986126] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.729 [2024-07-15 23:23:34.986137] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.729 [2024-07-15 23:23:34.986156] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.986169] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.986181] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same 
with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.986193] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.986206] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.986217] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.986229] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.986241] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.986253] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.986265] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.986277] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.986289] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.986301] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.986313] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.986325] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.986337] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.986349] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.986360] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.986373] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.986384] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.986396] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.986408] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.986420] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.986432] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.986444] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.986456] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.986468] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.986483] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.986495] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.986507] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.986519] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.986531] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.986543] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.986554] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.986566] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.986578] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.987330] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.987354] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.987367] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.987379] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.987390] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.987402] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.987414] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.987426] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.987437] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.987449] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the 
state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.987460] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.987472] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.987483] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.987496] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.987508] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.987519] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.987531] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.987543] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.987559] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.987571] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.987583] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.987595] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.987607] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.987619] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.987631] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.987643] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.987655] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.987667] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.987678] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.987690] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.987702] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.987714] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.987742] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.987773] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.987786] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.987799] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.730 [2024-07-15 23:23:34.987811] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.731 [2024-07-15 23:23:34.987823] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.731 [2024-07-15 23:23:34.987836] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.731 [2024-07-15 23:23:34.987848] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.731 [2024-07-15 23:23:34.987860] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.731 [2024-07-15 23:23:34.987873] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.731 [2024-07-15 23:23:34.987884] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.731 [2024-07-15 23:23:34.987897] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.731 [2024-07-15 23:23:34.987909] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.731 [2024-07-15 23:23:34.987927] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.731 [2024-07-15 23:23:34.987940] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.731 [2024-07-15 23:23:34.987952] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.731 [2024-07-15 23:23:34.987964] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.731 [2024-07-15 23:23:34.987976] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.731 [2024-07-15 23:23:34.987988] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.731 [2024-07-15 23:23:34.988000] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.731 [2024-07-15 23:23:34.988013] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.731 [2024-07-15 
23:23:34.988025] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.731 [2024-07-15 23:23:34.988062] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.731 [2024-07-15 23:23:34.988074] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.731 [2024-07-15 23:23:34.988086] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.731 [2024-07-15 23:23:34.988097] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.731 [2024-07-15 23:23:34.988109] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.731 [2024-07-15 23:23:34.988121] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.731 [2024-07-15 23:23:34.988132] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.731 [2024-07-15 23:23:34.988144] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.731 [2024-07-15 23:23:34.988156] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eea00 is same with the state(5) to be set 00:20:19.731 [2024-07-15 23:23:35.004046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.731 [2024-07-15 23:23:35.004125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.731 [2024-07-15 23:23:35.004143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.731 [2024-07-15 23:23:35.004158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.731 [2024-07-15 23:23:35.004173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.731 [2024-07-15 23:23:35.004187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.731 [2024-07-15 23:23:35.004200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.731 [2024-07-15 23:23:35.004213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.731 [2024-07-15 23:23:35.004227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe85520 is same with the state(5) to be set 00:20:19.731 [2024-07-15 23:23:35.004330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.731 [2024-07-15 23:23:35.004351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.731 [2024-07-15 
23:23:35.004376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.731 [2024-07-15 23:23:35.004390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.731 [2024-07-15 23:23:35.004403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.731 [2024-07-15 23:23:35.004416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.731 [2024-07-15 23:23:35.004430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.731 [2024-07-15 23:23:35.004443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.731 [2024-07-15 23:23:35.004456] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda15c0 is same with the state(5) to be set 00:20:19.731 [2024-07-15 23:23:35.004503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.731 [2024-07-15 23:23:35.004523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.731 [2024-07-15 23:23:35.004538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.731 [2024-07-15 23:23:35.004552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.731 [2024-07-15 23:23:35.004566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.731 [2024-07-15 23:23:35.004579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.731 [2024-07-15 23:23:35.004593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.731 [2024-07-15 23:23:35.004606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.731 [2024-07-15 23:23:35.004619] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe91040 is same with the state(5) to be set 00:20:19.731 [2024-07-15 23:23:35.004669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.731 [2024-07-15 23:23:35.004689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.731 [2024-07-15 23:23:35.004704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.731 [2024-07-15 23:23:35.004718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.731 [2024-07-15 23:23:35.004732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.731 [2024-07-15 23:23:35.004753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.731 [2024-07-15 23:23:35.004768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.731 [2024-07-15 23:23:35.004787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.731 [2024-07-15 23:23:35.004801] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf4d50 is same with the state(5) to be set 00:20:19.731 [2024-07-15 23:23:35.004850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.731 [2024-07-15 23:23:35.004870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.731 [2024-07-15 23:23:35.004886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.731 [2024-07-15 23:23:35.004899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.731 [2024-07-15 23:23:35.004913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.731 [2024-07-15 23:23:35.004926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.731 [2024-07-15 23:23:35.004941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.731 [2024-07-15 23:23:35.004954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.731 [2024-07-15 23:23:35.004967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2610 is same with the state(5) to be set 00:20:19.731 [2024-07-15 23:23:35.005013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.731 [2024-07-15 23:23:35.005033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.731 [2024-07-15 23:23:35.005048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.731 [2024-07-15 23:23:35.005062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.731 [2024-07-15 23:23:35.005076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.731 [2024-07-15 23:23:35.005089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.731 [2024-07-15 23:23:35.005103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.731 [2024-07-15 
23:23:35.005117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.731 [2024-07-15 23:23:35.005130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd001a0 is same with the state(5) to be set 00:20:19.731 [2024-07-15 23:23:35.005174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.731 [2024-07-15 23:23:35.005195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.731 [2024-07-15 23:23:35.005210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.731 [2024-07-15 23:23:35.005223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.731 [2024-07-15 23:23:35.005237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.731 [2024-07-15 23:23:35.005250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.731 [2024-07-15 23:23:35.005270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.731 [2024-07-15 23:23:35.005284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.732 [2024-07-15 23:23:35.005297] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97a20 is same with the state(5) to be set 00:20:19.732 [2024-07-15 23:23:35.005344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.732 [2024-07-15 23:23:35.005364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.732 [2024-07-15 23:23:35.005381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.732 [2024-07-15 23:23:35.005395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.732 [2024-07-15 23:23:35.005411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.732 [2024-07-15 23:23:35.005424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.732 [2024-07-15 23:23:35.005438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.732 [2024-07-15 23:23:35.005452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.732 [2024-07-15 23:23:35.005465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfca40 is same with the state(5) to be set 00:20:19.732 [2024-07-15 23:23:35.005511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) 
qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.732 [2024-07-15 23:23:35.005530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.732 [2024-07-15 23:23:35.005555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.732 [2024-07-15 23:23:35.005568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.732 [2024-07-15 23:23:35.005582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.732 [2024-07-15 23:23:35.005596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.732 [2024-07-15 23:23:35.005610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.732 [2024-07-15 23:23:35.005623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.732 [2024-07-15 23:23:35.005636] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9cac0 is same with the state(5) to be set 00:20:19.732 [2024-07-15 23:23:35.005690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.732 [2024-07-15 23:23:35.005709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.732 [2024-07-15 23:23:35.005732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.732 [2024-07-15 23:23:35.005754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.732 [2024-07-15 23:23:35.005769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.732 [2024-07-15 23:23:35.005786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.732 [2024-07-15 23:23:35.005801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.732 [2024-07-15 23:23:35.005815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.732 [2024-07-15 23:23:35.005827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd0d90 is same with the state(5) to be set 00:20:19.732 [2024-07-15 23:23:35.006973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.732 [2024-07-15 23:23:35.007000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.732 [2024-07-15 23:23:35.007037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.732 [2024-07-15 
23:23:35.007053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.732 [2024-07-15 23:23:35.007072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.732 [2024-07-15 23:23:35.007086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.732 [2024-07-15 23:23:35.007102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.732 [2024-07-15 23:23:35.007116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.732 [2024-07-15 23:23:35.007132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.732 [2024-07-15 23:23:35.007146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.732 [2024-07-15 23:23:35.007162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.732 [2024-07-15 23:23:35.007176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.732 [2024-07-15 23:23:35.007192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.732 [2024-07-15 23:23:35.007206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.732 [2024-07-15 23:23:35.007221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.732 [2024-07-15 23:23:35.007235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.732 [2024-07-15 23:23:35.007251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.732 [2024-07-15 23:23:35.007265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.732 [2024-07-15 23:23:35.007281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.732 [2024-07-15 23:23:35.007295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.732 [2024-07-15 23:23:35.007311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.732 [2024-07-15 23:23:35.007331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.732 [2024-07-15 23:23:35.007348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.732 [2024-07-15 
23:23:35.007362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.732 [2024-07-15 23:23:35.007378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.732 [2024-07-15 23:23:35.007392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.732 [2024-07-15 23:23:35.007408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.732 [2024-07-15 23:23:35.007422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.732 [2024-07-15 23:23:35.007438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.732 [2024-07-15 23:23:35.007452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.732 [2024-07-15 23:23:35.007467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.732 [2024-07-15 23:23:35.007481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.732 [2024-07-15 23:23:35.007497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.732 [2024-07-15 23:23:35.007511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.732 [2024-07-15 23:23:35.007526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.732 [2024-07-15 23:23:35.007540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.732 [2024-07-15 23:23:35.007555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.732 [2024-07-15 23:23:35.007569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.732 [2024-07-15 23:23:35.007584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.732 [2024-07-15 23:23:35.007598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.732 [2024-07-15 23:23:35.007614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.732 [2024-07-15 23:23:35.007628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.732 [2024-07-15 23:23:35.007643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.732 [2024-07-15 23:23:35.007657] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.732 [2024-07-15 23:23:35.007673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.732 [2024-07-15 23:23:35.007686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.732 [2024-07-15 23:23:35.007707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.732 [2024-07-15 23:23:35.007728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.732 [2024-07-15 23:23:35.007751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.732 [2024-07-15 23:23:35.007767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.732 [2024-07-15 23:23:35.007783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.732 [2024-07-15 23:23:35.007797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.732 [2024-07-15 23:23:35.007813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.732 [2024-07-15 23:23:35.007827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.732 [2024-07-15 23:23:35.007843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.732 [2024-07-15 23:23:35.007857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.732 [2024-07-15 23:23:35.007873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.732 [2024-07-15 23:23:35.007886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.733 [2024-07-15 23:23:35.007902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.733 [2024-07-15 23:23:35.007916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.733 [2024-07-15 23:23:35.007932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.733 [2024-07-15 23:23:35.007945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.733 [2024-07-15 23:23:35.007961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.733 [2024-07-15 23:23:35.007974] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.733 [2024-07-15 23:23:35.007990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.733 [2024-07-15 23:23:35.008012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.733 [2024-07-15 23:23:35.008033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.733 [2024-07-15 23:23:35.008047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.733 [2024-07-15 23:23:35.008062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.733 [2024-07-15 23:23:35.008077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.733 [2024-07-15 23:23:35.008093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.733 [2024-07-15 23:23:35.008111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.733 [2024-07-15 23:23:35.008127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.733 [2024-07-15 23:23:35.008142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.733 [2024-07-15 23:23:35.008159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.733 [2024-07-15 23:23:35.008173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.733 [2024-07-15 23:23:35.008188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.733 [2024-07-15 23:23:35.008202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.733 [2024-07-15 23:23:35.008217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.733 [2024-07-15 23:23:35.008231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.733 [2024-07-15 23:23:35.008247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.733 [2024-07-15 23:23:35.008260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.733 [2024-07-15 23:23:35.008275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.733 [2024-07-15 23:23:35.008289] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.733 [2024-07-15 23:23:35.008304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.733 [2024-07-15 23:23:35.008318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.733 [2024-07-15 23:23:35.008333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.733 [2024-07-15 23:23:35.008347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.733 [2024-07-15 23:23:35.008362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.733 [2024-07-15 23:23:35.008375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.733 [2024-07-15 23:23:35.008391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.733 [2024-07-15 23:23:35.008405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.733 [2024-07-15 23:23:35.008420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.733 [2024-07-15 23:23:35.008434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.733 [2024-07-15 23:23:35.008449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.733 [2024-07-15 23:23:35.008462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.733 [2024-07-15 23:23:35.008482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.733 [2024-07-15 23:23:35.008498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.733 [2024-07-15 23:23:35.008515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.733 [2024-07-15 23:23:35.008529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.733 [2024-07-15 23:23:35.008546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.733 [2024-07-15 23:23:35.008560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.733 [2024-07-15 23:23:35.008577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.733 [2024-07-15 23:23:35.008590] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.733 [2024-07-15 23:23:35.008606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.733 [2024-07-15 23:23:35.008620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.733 [2024-07-15 23:23:35.008635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.733 [2024-07-15 23:23:35.008649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.733 [2024-07-15 23:23:35.008664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.733 [2024-07-15 23:23:35.008678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.733 [2024-07-15 23:23:35.008694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.733 [2024-07-15 23:23:35.008708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.733 [2024-07-15 23:23:35.008732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.733 [2024-07-15 23:23:35.008752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.733 [2024-07-15 23:23:35.008769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.733 [2024-07-15 23:23:35.008783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.733 [2024-07-15 23:23:35.008799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.733 [2024-07-15 23:23:35.008813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.733 [2024-07-15 23:23:35.008828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.733 [2024-07-15 23:23:35.008842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.733 [2024-07-15 23:23:35.008857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.733 [2024-07-15 23:23:35.008875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.733 [2024-07-15 23:23:35.008891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.733 [2024-07-15 23:23:35.008905] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.733 [2024-07-15 23:23:35.008920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.733 [2024-07-15 23:23:35.008935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.733 [2024-07-15 23:23:35.008950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.733 [2024-07-15 23:23:35.008965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.733 [2024-07-15 23:23:35.009010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:19.733 [2024-07-15 23:23:35.009102] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe21260 was disconnected and freed. reset controller. 00:20:19.733 [2024-07-15 23:23:35.009164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.733 [2024-07-15 23:23:35.009185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.733 [2024-07-15 23:23:35.009208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.733 [2024-07-15 23:23:35.009224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.733 [2024-07-15 23:23:35.009242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.733 [2024-07-15 23:23:35.009256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.733 [2024-07-15 23:23:35.009273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.733 [2024-07-15 23:23:35.009287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.733 [2024-07-15 23:23:35.009303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.733 [2024-07-15 23:23:35.009316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.733 [2024-07-15 23:23:35.009333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.733 [2024-07-15 23:23:35.009347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.733 [2024-07-15 23:23:35.009363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.733 [2024-07-15 23:23:35.009377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.734 [2024-07-15 23:23:35.009393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.734 [2024-07-15 23:23:35.009407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.734 [2024-07-15 23:23:35.009429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.734 [2024-07-15 23:23:35.009444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.734 [2024-07-15 23:23:35.009461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.734 [2024-07-15 23:23:35.009475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.734 [2024-07-15 23:23:35.009490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.734 [2024-07-15 23:23:35.009504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.734 [2024-07-15 23:23:35.009520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.734 [2024-07-15 23:23:35.009533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.734 [2024-07-15 23:23:35.009549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.734 [2024-07-15 23:23:35.009562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.734 [2024-07-15 23:23:35.009578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.734 [2024-07-15 23:23:35.009591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.734 [2024-07-15 23:23:35.009607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.734 [2024-07-15 23:23:35.009620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.734 [2024-07-15 23:23:35.009635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.734 [2024-07-15 23:23:35.009649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.734 [2024-07-15 23:23:35.009664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.734 [2024-07-15 23:23:35.009678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.734 [2024-07-15 23:23:35.009694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.734 [2024-07-15 23:23:35.009707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.734 [2024-07-15 23:23:35.009733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.734 [2024-07-15 23:23:35.009755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.734 [2024-07-15 23:23:35.009771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.734 [2024-07-15 23:23:35.009786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.734 [2024-07-15 23:23:35.009801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.734 [2024-07-15 23:23:35.009819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.734 [2024-07-15 23:23:35.009836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.734 [2024-07-15 23:23:35.009849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.734 [2024-07-15 23:23:35.009865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.734 [2024-07-15 23:23:35.009879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.734 [2024-07-15 23:23:35.009895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.734 [2024-07-15 23:23:35.009909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.734 [2024-07-15 23:23:35.009925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.734 [2024-07-15 23:23:35.009938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.734 [2024-07-15 23:23:35.009954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.734 [2024-07-15 23:23:35.009967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.734 [2024-07-15 23:23:35.009983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.734 [2024-07-15 23:23:35.009996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.734 [2024-07-15 23:23:35.010012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.734 [2024-07-15 23:23:35.010031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.734 [2024-07-15 23:23:35.010046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.734 [2024-07-15 23:23:35.010059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.734 [2024-07-15 23:23:35.010075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.734 [2024-07-15 23:23:35.010088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.734 [2024-07-15 23:23:35.010104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.734 [2024-07-15 23:23:35.010118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.734 [2024-07-15 23:23:35.010133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.734 [2024-07-15 23:23:35.010147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.734 [2024-07-15 23:23:35.010162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.734 [2024-07-15 23:23:35.010175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.734 [2024-07-15 23:23:35.010195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.734 [2024-07-15 23:23:35.010209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.734 [2024-07-15 23:23:35.010225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.734 [2024-07-15 23:23:35.010238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.734 [2024-07-15 23:23:35.010254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.734 [2024-07-15 23:23:35.010268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.734 [2024-07-15 23:23:35.010283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.734 [2024-07-15 23:23:35.010297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.734 [2024-07-15 23:23:35.010313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.734 [2024-07-15 23:23:35.010326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.734 [2024-07-15 23:23:35.010342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.734 [2024-07-15 23:23:35.010356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.734 [2024-07-15 23:23:35.010371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.734 [2024-07-15 23:23:35.010384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.734 [2024-07-15 23:23:35.010399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.734 [2024-07-15 23:23:35.010413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.734 [2024-07-15 23:23:35.010428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.734 [2024-07-15 23:23:35.010441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.734 [2024-07-15 23:23:35.010457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.734 [2024-07-15 23:23:35.010470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.735 [2024-07-15 23:23:35.010485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.735 [2024-07-15 23:23:35.010499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.735 [2024-07-15 23:23:35.010514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.735 [2024-07-15 23:23:35.010528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.735 [2024-07-15 23:23:35.010544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.735 [2024-07-15 23:23:35.010560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.735 [2024-07-15 23:23:35.010577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.735 [2024-07-15 23:23:35.010590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:20:19.735 [2024-07-15 23:23:35.010606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.735 [2024-07-15 23:23:35.010620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.735 [2024-07-15 23:23:35.010636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.735 [2024-07-15 23:23:35.010650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.735 [2024-07-15 23:23:35.010666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.735 [2024-07-15 23:23:35.010680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.735 [2024-07-15 23:23:35.010695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.735 [2024-07-15 23:23:35.010709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.735 [2024-07-15 23:23:35.010743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.735 [2024-07-15 23:23:35.010759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.735 [2024-07-15 23:23:35.010775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.735 [2024-07-15 23:23:35.010790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.735 [2024-07-15 23:23:35.010806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.735 [2024-07-15 23:23:35.010820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.735 [2024-07-15 23:23:35.010836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.735 [2024-07-15 23:23:35.010849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.735 [2024-07-15 23:23:35.010867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.735 [2024-07-15 23:23:35.010882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.735 [2024-07-15 23:23:35.010898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.735 [2024-07-15 23:23:35.010912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:20:19.735 [2024-07-15 23:23:35.010928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.735 [2024-07-15 23:23:35.010942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.735 [2024-07-15 23:23:35.010962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.735 [2024-07-15 23:23:35.010977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.735 [2024-07-15 23:23:35.010993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.735 [2024-07-15 23:23:35.011008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.735 [2024-07-15 23:23:35.011024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.735 [2024-07-15 23:23:35.011047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.735 [2024-07-15 23:23:35.011064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.735 [2024-07-15 23:23:35.011078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.735 [2024-07-15 23:23:35.011093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.735 [2024-07-15 23:23:35.011108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.735 [2024-07-15 23:23:35.011124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.735 [2024-07-15 23:23:35.011138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.735 [2024-07-15 23:23:35.011161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xccafb0 is same with the state(5) to be set 00:20:19.735 [2024-07-15 23:23:35.011233] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xccafb0 was disconnected and freed. reset controller. 
00:20:19.735 [2024-07-15 23:23:35.011409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.735 [2024-07-15 23:23:35.011431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.735 [2024-07-15 23:23:35.011464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.735 [2024-07-15 23:23:35.011480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.735 [2024-07-15 23:23:35.011496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.735 [2024-07-15 23:23:35.011510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.735 [2024-07-15 23:23:35.011526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.735 [2024-07-15 23:23:35.011541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.735 [2024-07-15 23:23:35.011557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.735 [2024-07-15 23:23:35.011572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.735 [2024-07-15 23:23:35.011588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.735 [2024-07-15 23:23:35.011606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.735 [2024-07-15 23:23:35.011624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.735 [2024-07-15 23:23:35.011639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.735 [2024-07-15 23:23:35.011654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.735 [2024-07-15 23:23:35.011668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.735 [2024-07-15 23:23:35.011684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.735 [2024-07-15 23:23:35.011698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.735 [2024-07-15 23:23:35.011714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.735 [2024-07-15 23:23:35.011728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.735 [2024-07-15 
23:23:35.011752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.735 [2024-07-15 23:23:35.011768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.735 [2024-07-15 23:23:35.011784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.735 [2024-07-15 23:23:35.011809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.735 [2024-07-15 23:23:35.011826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.735 [2024-07-15 23:23:35.011840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.735 [2024-07-15 23:23:35.011856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.735 [2024-07-15 23:23:35.011870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.735 [2024-07-15 23:23:35.011886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.735 [2024-07-15 23:23:35.011901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.735 [2024-07-15 23:23:35.011922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.735 [2024-07-15 23:23:35.011937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.735 [2024-07-15 23:23:35.011953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.735 [2024-07-15 23:23:35.011967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.735 [2024-07-15 23:23:35.011983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.735 [2024-07-15 23:23:35.011997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.735 [2024-07-15 23:23:35.012020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.735 [2024-07-15 23:23:35.012046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.735 [2024-07-15 23:23:35.012062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.735 [2024-07-15 23:23:35.012077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.735 [2024-07-15 
23:23:35.012093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.736 [2024-07-15 23:23:35.012107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.736 [2024-07-15 23:23:35.012123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.736 [2024-07-15 23:23:35.012137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.736 [2024-07-15 23:23:35.012153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.736 [2024-07-15 23:23:35.012166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.736 [2024-07-15 23:23:35.012182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.736 [2024-07-15 23:23:35.012196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.736 [2024-07-15 23:23:35.012212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.736 [2024-07-15 23:23:35.012226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.736 [2024-07-15 23:23:35.012242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.736 [2024-07-15 23:23:35.012256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.736 [2024-07-15 23:23:35.012273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.736 [2024-07-15 23:23:35.012287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.736 [2024-07-15 23:23:35.012302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.736 [2024-07-15 23:23:35.012318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.736 [2024-07-15 23:23:35.012334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.736 [2024-07-15 23:23:35.012349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.736 [2024-07-15 23:23:35.012365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.736 [2024-07-15 23:23:35.012380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.736 [2024-07-15 
23:23:35.012396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.736 [2024-07-15 23:23:35.012414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.736 [2024-07-15 23:23:35.012436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.736 [2024-07-15 23:23:35.012450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.736 [2024-07-15 23:23:35.012467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.736 [2024-07-15 23:23:35.012481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.736 [2024-07-15 23:23:35.012497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.736 [2024-07-15 23:23:35.012510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.736 [2024-07-15 23:23:35.012527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.736 [2024-07-15 23:23:35.012541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.736 [2024-07-15 23:23:35.012557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.736 [2024-07-15 23:23:35.012571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.736 [2024-07-15 23:23:35.012587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.736 [2024-07-15 23:23:35.012601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.736 [2024-07-15 23:23:35.012617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.736 [2024-07-15 23:23:35.012632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.736 [2024-07-15 23:23:35.012648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.736 [2024-07-15 23:23:35.012662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.736 [2024-07-15 23:23:35.012678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.736 [2024-07-15 23:23:35.012692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.736 [2024-07-15 
23:23:35.012708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.736 [2024-07-15 23:23:35.012734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.736 [2024-07-15 23:23:35.012759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.736 [2024-07-15 23:23:35.012773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.736 [2024-07-15 23:23:35.012789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.736 [2024-07-15 23:23:35.012803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.736 [2024-07-15 23:23:35.012824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.736 [2024-07-15 23:23:35.012839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.736 [2024-07-15 23:23:35.012855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.736 [2024-07-15 23:23:35.012869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.736 [2024-07-15 23:23:35.012885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.736 [2024-07-15 23:23:35.012899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.736 [2024-07-15 23:23:35.012915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.736 [2024-07-15 23:23:35.012929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.736 [2024-07-15 23:23:35.012950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.736 [2024-07-15 23:23:35.012964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.736 [2024-07-15 23:23:35.012981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.736 [2024-07-15 23:23:35.012994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.736 [2024-07-15 23:23:35.013010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.736 [2024-07-15 23:23:35.013030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.736 [2024-07-15 
23:23:35.013046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.736 [2024-07-15 23:23:35.013059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.736 [2024-07-15 23:23:35.013075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.736 [2024-07-15 23:23:35.013089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.736 [2024-07-15 23:23:35.013105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.736 [2024-07-15 23:23:35.013118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.736 [2024-07-15 23:23:35.013134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.736 [2024-07-15 23:23:35.013148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.736 [2024-07-15 23:23:35.013164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.736 [2024-07-15 23:23:35.013177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.736 [2024-07-15 23:23:35.013193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.736 [2024-07-15 23:23:35.013210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.736 [2024-07-15 23:23:35.013228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.736 [2024-07-15 23:23:35.013242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.736 [2024-07-15 23:23:35.013258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.736 [2024-07-15 23:23:35.013272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.736 [2024-07-15 23:23:35.013288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.736 [2024-07-15 23:23:35.013302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.736 [2024-07-15 23:23:35.013318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.736 [2024-07-15 23:23:35.013332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.736 [2024-07-15 
23:23:35.013348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.736 [2024-07-15 23:23:35.013361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.736 [2024-07-15 23:23:35.013378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.736 [2024-07-15 23:23:35.013392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.736 [2024-07-15 23:23:35.013408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.736 [2024-07-15 23:23:35.013421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.736 [2024-07-15 23:23:35.013438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.737 [2024-07-15 23:23:35.013452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.737 [2024-07-15 23:23:35.013466] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1457180 is same with the state(5) to be set 00:20:19.737 [2024-07-15 23:23:35.013537] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1457180 was disconnected and freed. reset controller. 00:20:19.737 [2024-07-15 23:23:35.017888] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:20:19.737 [2024-07-15 23:23:35.017937] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:20:19.737 [2024-07-15 23:23:35.017970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf4d50 (9): Bad file descriptor 00:20:19.737 [2024-07-15 23:23:35.017994] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfca40 (9): Bad file descriptor 00:20:19.737 [2024-07-15 23:23:35.018015] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe85520 (9): Bad file descriptor 00:20:19.737 [2024-07-15 23:23:35.018048] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda15c0 (9): Bad file descriptor 00:20:19.737 [2024-07-15 23:23:35.018079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe91040 (9): Bad file descriptor 00:20:19.737 [2024-07-15 23:23:35.018117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d2610 (9): Bad file descriptor 00:20:19.737 [2024-07-15 23:23:35.018142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd001a0 (9): Bad file descriptor 00:20:19.737 [2024-07-15 23:23:35.018169] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd97a20 (9): Bad file descriptor 00:20:19.737 [2024-07-15 23:23:35.018200] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9cac0 (9): Bad file descriptor 00:20:19.737 [2024-07-15 23:23:35.018228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xcd0d90 (9): Bad file descriptor 00:20:19.737 [2024-07-15 23:23:35.018842] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:20:19.737 [2024-07-15 23:23:35.019656] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:19.737 [2024-07-15 23:23:35.020023] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:19.737 [2024-07-15 23:23:35.020109] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:19.737 [2024-07-15 23:23:35.020189] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:19.737 [2024-07-15 23:23:35.020263] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:19.737 [2024-07-15 23:23:35.020436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.737 [2024-07-15 23:23:35.020466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfca40 with addr=10.0.0.2, port=4420 00:20:19.737 [2024-07-15 23:23:35.020484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfca40 is same with the state(5) to be set 00:20:19.737 [2024-07-15 23:23:35.020621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.737 [2024-07-15 23:23:35.020646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcf4d50 with addr=10.0.0.2, port=4420 00:20:19.737 [2024-07-15 23:23:35.020663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf4d50 is same with the state(5) to be set 00:20:19.737 [2024-07-15 23:23:35.020805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.737 [2024-07-15 23:23:35.020831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d2610 with addr=10.0.0.2, port=4420 00:20:19.737 [2024-07-15 23:23:35.020854] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2610 is same with the state(5) to be set 00:20:19.737 [2024-07-15 23:23:35.020931] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:19.737 [2024-07-15 23:23:35.021003] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:19.737 [2024-07-15 23:23:35.021151] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfca40 (9): Bad file descriptor 00:20:19.737 [2024-07-15 23:23:35.021179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf4d50 (9): Bad file descriptor 00:20:19.737 [2024-07-15 23:23:35.021197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d2610 (9): Bad file descriptor 00:20:19.737 [2024-07-15 23:23:35.021297] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:20:19.737 [2024-07-15 23:23:35.021317] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:20:19.737 [2024-07-15 23:23:35.021334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:20:19.737 [2024-07-15 23:23:35.021354] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:20:19.737 [2024-07-15 23:23:35.021368] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:20:19.737 [2024-07-15 23:23:35.021381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:20:19.737 [2024-07-15 23:23:35.021404] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:20:19.737 [2024-07-15 23:23:35.021418] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:20:19.737 [2024-07-15 23:23:35.021431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:20:19.737 [2024-07-15 23:23:35.021487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:19.737 [2024-07-15 23:23:35.021506] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:19.737 [2024-07-15 23:23:35.021518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:19.737 [2024-07-15 23:23:35.028134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.737 [2024-07-15 23:23:35.028189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.737 [2024-07-15 23:23:35.028224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.737 [2024-07-15 23:23:35.028240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.737 [2024-07-15 23:23:35.028257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.737 [2024-07-15 23:23:35.028271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.737 [2024-07-15 23:23:35.028286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.737 [2024-07-15 23:23:35.028301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.737 [2024-07-15 23:23:35.028317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.737 [2024-07-15 23:23:35.028331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.737 [2024-07-15 23:23:35.028347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.737 [2024-07-15 23:23:35.028361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.737 [2024-07-15 23:23:35.028377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.737 [2024-07-15 23:23:35.028390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.737 [2024-07-15 23:23:35.028406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.737 [2024-07-15 23:23:35.028419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.737 [2024-07-15 23:23:35.028435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.737 [2024-07-15 23:23:35.028448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.737 [2024-07-15 23:23:35.028464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.737 [2024-07-15 23:23:35.028477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.737 [2024-07-15 23:23:35.028493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.737 [2024-07-15 23:23:35.028519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.737 [2024-07-15 23:23:35.028535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.737 [2024-07-15 23:23:35.028550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.737 [2024-07-15 23:23:35.028565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.737 [2024-07-15 23:23:35.028579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.737 [2024-07-15 23:23:35.028606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.737 [2024-07-15 23:23:35.028619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.737 [2024-07-15 23:23:35.028635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.737 [2024-07-15 23:23:35.028649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.737 [2024-07-15 23:23:35.028664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.737 [2024-07-15 23:23:35.028677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.737 [2024-07-15 23:23:35.028692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:19.737 [2024-07-15 23:23:35.028706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.737 [2024-07-15 23:23:35.028721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.737 [2024-07-15 23:23:35.028734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.737 [2024-07-15 23:23:35.028760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.737 [2024-07-15 23:23:35.028775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.737 [2024-07-15 23:23:35.028791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.737 [2024-07-15 23:23:35.028805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.737 [2024-07-15 23:23:35.028820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.737 [2024-07-15 23:23:35.028834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.737 [2024-07-15 23:23:35.028850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.737 [2024-07-15 23:23:35.028863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.737 [2024-07-15 23:23:35.028879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.738 [2024-07-15 23:23:35.028893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.738 [2024-07-15 23:23:35.028913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.738 [2024-07-15 23:23:35.028927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.738 [2024-07-15 23:23:35.028943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.738 [2024-07-15 23:23:35.028957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.738 [2024-07-15 23:23:35.028973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.738 [2024-07-15 23:23:35.028988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.738 [2024-07-15 23:23:35.029003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:19.738 [2024-07-15 23:23:35.029021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.738 [2024-07-15 23:23:35.029037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.738 [2024-07-15 23:23:35.029051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.738 [2024-07-15 23:23:35.029066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.738 [2024-07-15 23:23:35.029080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.738 [2024-07-15 23:23:35.029095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.738 [2024-07-15 23:23:35.029109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.738 [2024-07-15 23:23:35.029124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.738 [2024-07-15 23:23:35.029138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.738 [2024-07-15 23:23:35.029154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.738 [2024-07-15 23:23:35.029167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.738 [2024-07-15 23:23:35.029183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.738 [2024-07-15 23:23:35.029196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.738 [2024-07-15 23:23:35.029212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.738 [2024-07-15 23:23:35.029225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.738 [2024-07-15 23:23:35.029241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.738 [2024-07-15 23:23:35.029255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.738 [2024-07-15 23:23:35.029271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.738 [2024-07-15 23:23:35.029289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.738 [2024-07-15 23:23:35.029305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.738 [2024-07-15 
23:23:35.029319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.738 [2024-07-15 23:23:35.029335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.738 [2024-07-15 23:23:35.029349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.738 [2024-07-15 23:23:35.029365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.738 [2024-07-15 23:23:35.029378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.738 [2024-07-15 23:23:35.029393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.738 [2024-07-15 23:23:35.029407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.738 [2024-07-15 23:23:35.029423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.738 [2024-07-15 23:23:35.029437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.738 [2024-07-15 23:23:35.029452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.738 [2024-07-15 23:23:35.029466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.738 [2024-07-15 23:23:35.029482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.738 [2024-07-15 23:23:35.029495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.738 [2024-07-15 23:23:35.029511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.738 [2024-07-15 23:23:35.029524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.738 [2024-07-15 23:23:35.029540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.738 [2024-07-15 23:23:35.029554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.032 [2024-07-15 23:23:35.029570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.032 [2024-07-15 23:23:35.029584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.032 [2024-07-15 23:23:35.029599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.032 [2024-07-15 23:23:35.029613] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.032 [2024-07-15 23:23:35.029628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.032 [2024-07-15 23:23:35.029642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.032 [2024-07-15 23:23:35.029665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.032 [2024-07-15 23:23:35.029680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.032 [2024-07-15 23:23:35.029697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.032 [2024-07-15 23:23:35.029711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.032 [2024-07-15 23:23:35.029732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.032 [2024-07-15 23:23:35.029754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.032 [2024-07-15 23:23:35.029773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.032 [2024-07-15 23:23:35.029787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.032 [2024-07-15 23:23:35.029804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.032 [2024-07-15 23:23:35.029817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.032 [2024-07-15 23:23:35.029834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.032 [2024-07-15 23:23:35.029848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.032 [2024-07-15 23:23:35.029864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.032 [2024-07-15 23:23:35.029877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.032 [2024-07-15 23:23:35.029893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.032 [2024-07-15 23:23:35.029907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.032 [2024-07-15 23:23:35.029924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.032 [2024-07-15 23:23:35.029937] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.032 [2024-07-15 23:23:35.029953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.032 [2024-07-15 23:23:35.029968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.032 [2024-07-15 23:23:35.029984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.032 [2024-07-15 23:23:35.029999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.032 [2024-07-15 23:23:35.030025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.032 [2024-07-15 23:23:35.030038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.032 [2024-07-15 23:23:35.030054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.032 [2024-07-15 23:23:35.030072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.032 [2024-07-15 23:23:35.030089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.032 [2024-07-15 23:23:35.030103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.032 [2024-07-15 23:23:35.030119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.032 [2024-07-15 23:23:35.030132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.032 [2024-07-15 23:23:35.030148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.032 [2024-07-15 23:23:35.030161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.032 [2024-07-15 23:23:35.030177] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd79310 is same with the state(5) to be set 00:20:20.032 [2024-07-15 23:23:35.031492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.032 [2024-07-15 23:23:35.031516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.032 [2024-07-15 23:23:35.031537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.032 [2024-07-15 23:23:35.031552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.032 [2024-07-15 23:23:35.031568] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.032 [2024-07-15 23:23:35.031581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.032 [2024-07-15 23:23:35.031597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.032 [2024-07-15 23:23:35.031610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.032 [2024-07-15 23:23:35.031626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.032 [2024-07-15 23:23:35.031639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.032 [2024-07-15 23:23:35.031656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.032 [2024-07-15 23:23:35.031669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.032 [2024-07-15 23:23:35.031685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.032 [2024-07-15 23:23:35.031699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.032 [2024-07-15 23:23:35.031714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.032 [2024-07-15 23:23:35.031729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.032 [2024-07-15 23:23:35.031752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.032 [2024-07-15 23:23:35.031773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.032 [2024-07-15 23:23:35.031791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.032 [2024-07-15 23:23:35.031805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.032 [2024-07-15 23:23:35.031820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.032 [2024-07-15 23:23:35.031834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.032 [2024-07-15 23:23:35.031850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.032 [2024-07-15 23:23:35.031864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.032 [2024-07-15 23:23:35.031880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 
nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.032 [2024-07-15 23:23:35.031894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.032 [2024-07-15 23:23:35.031909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.032 [2024-07-15 23:23:35.031923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.032 [2024-07-15 23:23:35.031938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.032 [2024-07-15 23:23:35.031952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.032 [2024-07-15 23:23:35.031968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.032 [2024-07-15 23:23:35.031981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.032 [2024-07-15 23:23:35.031997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.032 [2024-07-15 23:23:35.032010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.032 [2024-07-15 23:23:35.032028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.032 [2024-07-15 23:23:35.032041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.032057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.032070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.032085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.032099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.032115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.032129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.032148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.032163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.032185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.032198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.032213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.032227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.032242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.032256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.032272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.032286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.032301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.032315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.032331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.032344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.032360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.032373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.032388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.032402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.032418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.032431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.032446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.032462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.032478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.032491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.032507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.032527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.032544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.032557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.032573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.032587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.032603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.032616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.032632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.032645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.032661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.032674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.032689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.032703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.032719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.032732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.032756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.032771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.032787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:20.033 [2024-07-15 23:23:35.032803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.032819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.032833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.032849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.032863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.032878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.032892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.032912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.032927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.032942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.032956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.032972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.032986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.033001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.033015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.033031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.033045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.033061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.033075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.033090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 
23:23:35.033113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.033129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.033143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.033159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.033173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.033189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.033203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.033219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.033233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.033249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.033263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.033279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.033296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.033312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.033326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.033342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.033356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.033372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.033385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.033402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.033415] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.033431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.033445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.033460] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1ff20 is same with the state(5) to be set 00:20:20.033 [2024-07-15 23:23:35.034713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.034749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.034771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.034787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.034804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.034817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.034833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.034847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.034863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.034876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.034891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.034904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.034920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.034938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.034954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.034968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.034984] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.034997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.035013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.035031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.035046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.035059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.035075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.035088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.035103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.035116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.035132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.035145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.035160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.035173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.035189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.035202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.035217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.035231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.035246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.035259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.035275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.033 [2024-07-15 23:23:35.035288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.033 [2024-07-15 23:23:35.035303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.035321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.035338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.035351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.035367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.035380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.035396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.035410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.035425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.035438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.035454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.035468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.035483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.035496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.035512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.035525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.035542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.035555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.035570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.035583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.035599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.035612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.035628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.035642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.035657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.035670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.035690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.035704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.035729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.035755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.035772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.035786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.035802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.035815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.035831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.035845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.035860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.035873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.035889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.035902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.035918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.035931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.035948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.035961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.035977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.035991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.036007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.036023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.036038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.036052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.036068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.036085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.036102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.036115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.036130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.036144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.036160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.036173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.036189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:20.034 [2024-07-15 23:23:35.036202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.036217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.036231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.036246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.036260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.036275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.036289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.036305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.036320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.036336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.036350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.036366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.036379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.036395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.036410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.036426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.036440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.036460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.036474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.036491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 
23:23:35.036504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.036520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.036534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.036550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.036563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.036579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.036593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.036610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.036624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.036640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.036654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.036668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xccc4c0 is same with the state(5) to be set 00:20:20.034 [2024-07-15 23:23:35.037937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.037960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.037981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.037997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.038013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.038028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.038044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.038058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.038074] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.038089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.038110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.038125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.038141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.038154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.038170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.038184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.038199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.038213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.038228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.038242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.038258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.038271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.038286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.038300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.038316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.038331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.038346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.038360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.038375] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.038389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.038405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.038418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.038434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.038448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.038463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.038480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.038497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.038511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.034 [2024-07-15 23:23:35.038527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.034 [2024-07-15 23:23:35.038540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.038557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.038571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.038586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.038600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.038616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.038630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.038646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.038660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.038676] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.038689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.038705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.038719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.038735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.038757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.038773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.038787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.038803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.038817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.038833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.038847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.038867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.038882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.038897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.038911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.038926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.038940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.038956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.038969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.038986] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.039000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.039015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.039028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.039044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.039081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.039100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.039115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.039131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.039144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.039160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.039173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.039188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.039201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.039217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.039231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.039246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.039263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.039279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.039293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.039309] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.039322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.039338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.039360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.039375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.039389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.039404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.039418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.039433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.039446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.039462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.039475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.039491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.039504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.039519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.039533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.039548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.039561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.039576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.039590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.039605] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.039619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.039637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.039651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.039667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.039680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.039695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.039708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.039733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.039754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.039770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.039784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.039799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.039812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.039827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.039840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.039856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.039869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.039884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.039897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.039913] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15feb90 is same with the state(5) to be set 00:20:20.035 [2024-07-15 23:23:35.041167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.041190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.041214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.041229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.041246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.041260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.041280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.041295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.041319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.041335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.041351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.041365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.041380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.041394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.041410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.041424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.041440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.041453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.041469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.041483] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.041499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.041513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.041529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.041543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.041560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.041573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.041589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.035 [2024-07-15 23:23:35.041603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.035 [2024-07-15 23:23:35.041619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.041633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.041649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.041667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.041683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.041697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.041713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.041733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.041756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.041771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.041787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.041801] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.041823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.041838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.041854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.041867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.041883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.041896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.041912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.041925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.041941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.041955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.041971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.041985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.042000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.042014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.042031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.042045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.042069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.042083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.042099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.042113] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.042129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.042143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.042158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.042172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.042187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.042200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.042216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.042230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.042245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.042259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.042275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.042288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.042309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.042323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.042339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.042352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.042367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.042381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.042397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.042410] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.042426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.042444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.042461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.042475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.042491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.042504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.042519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.042533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.042549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.042563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.042578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.042592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.042608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.042622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.042638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.042651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.042666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.042680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.051796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.051853] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.051872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.051886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.051905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.051919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.051936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.051950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.051978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.051993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.052009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.052025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.052041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.052054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.052070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.052084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.052100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.052113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.052129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.052143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.052158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.052172] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.052188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.052201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.052217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.052231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.052246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.052260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.052276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.052290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.052306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a6710 is same with the state(5) to be set 00:20:20.036 [2024-07-15 23:23:35.053676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.053700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.053755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.053774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.053790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.053804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.053820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.053833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.053850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.053864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.053879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.053892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.053909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.053922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.053938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.053951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.053966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.053980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.053996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.054009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.054029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.054042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.054058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.054071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.054093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.054106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.054121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.054134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.054155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.054170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.054185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.036 [2024-07-15 23:23:35.054199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.036 [2024-07-15 23:23:35.054215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.054228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.054244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.054258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.054274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.054287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.054303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.054317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.054333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.054346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.054362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.054375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.054393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.054406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.054422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.054435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.054451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.054465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.054480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.054494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.054509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.054526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.054542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.054556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.054572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.054585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.054600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.054614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.054630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.054644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.054661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.054674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.054690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.054703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.054719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.054732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.054758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.054773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.054790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.054804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.054820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.054834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.054849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.054863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.054879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.054893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.054913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.054927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.054942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.054956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.054972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.054985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.055001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.055022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.055038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.055051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.055067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.055087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.055103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:20.037 [2024-07-15 23:23:35.055116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.055132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.055146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.055161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.055175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.055190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.055204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.055219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.055233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.055249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.055262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.055278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.055295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.055312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.055325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.055341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.055355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.055371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.055384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.055399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 
23:23:35.055413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.055429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.055442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.055458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.055471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.055487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.055500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.055516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.055529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.055545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.055558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.055574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.055587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.055603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.055616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.055633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.055646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.055664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd71380 is same with the state(5) to be set 00:20:20.037 [2024-07-15 23:23:35.057941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.057966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.057988] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.058003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.058027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.058042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.058057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.058071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.058087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.058101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.058116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.058130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.058145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.058158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.058174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.058187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.058203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.058216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.058232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.058245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.058260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.058273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.058291] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.058305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.037 [2024-07-15 23:23:35.058326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.037 [2024-07-15 23:23:35.058341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.058357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 23:23:35.058370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.058387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 23:23:35.058401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.058417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 23:23:35.058430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.058446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 23:23:35.058460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.058475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 23:23:35.058489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.058504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 23:23:35.058519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.058535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 23:23:35.058549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.058565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 23:23:35.058578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.058594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 23:23:35.058608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.058623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 23:23:35.058637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.058652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 23:23:35.058666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.058682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 23:23:35.058700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.058716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 23:23:35.058731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.058755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 23:23:35.058770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.058786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 23:23:35.058800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.058816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 23:23:35.058830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.058846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 23:23:35.058860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.058875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 23:23:35.058889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.058905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 23:23:35.058918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.058934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 23:23:35.058948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.058964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 23:23:35.058977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.058994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 23:23:35.059008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.059034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 23:23:35.059048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.059064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 23:23:35.059077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.059098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 23:23:35.059112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.059129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 23:23:35.059143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.059159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 23:23:35.059173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.059188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 23:23:35.059202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.059218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 23:23:35.059231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.059247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 23:23:35.059261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.059277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 23:23:35.059290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.059305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 23:23:35.059319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.059335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 23:23:35.059349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.059364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 23:23:35.059378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.059393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 23:23:35.059407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.059423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 23:23:35.059437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.059453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 23:23:35.059470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.059487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 23:23:35.059501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.059517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:20.038 [2024-07-15 23:23:35.059531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.059547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 23:23:35.059561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.059576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 23:23:35.059590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.059606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 23:23:35.059619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.059634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 23:23:35.059648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.059664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 23:23:35.059677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.059693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 23:23:35.059706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.059721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 23:23:35.059756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.059773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 23:23:35.059798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.059814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 23:23:35.059827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.059843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 
23:23:35.059856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.059876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 23:23:35.059890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.059906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.038 [2024-07-15 23:23:35.059920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.038 [2024-07-15 23:23:35.059934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd72880 is same with the state(5) to be set 00:20:20.038 [2024-07-15 23:23:35.062072] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.038 [2024-07-15 23:23:35.062106] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:20:20.038 [2024-07-15 23:23:35.062125] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:20:20.038 [2024-07-15 23:23:35.062143] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:20:20.038 [2024-07-15 23:23:35.062260] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:20.038 [2024-07-15 23:23:35.062288] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:20.038 [2024-07-15 23:23:35.062309] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:20.038 [2024-07-15 23:23:35.062421] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:20:20.038 [2024-07-15 23:23:35.062446] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:20:20.038 task offset: 30976 on job bdev=Nvme3n1 fails
00:20:20.038
00:20:20.038 Latency(us)
00:20:20.038 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:20.038 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:20.038 Job: Nvme1n1 ended in about 1.02 seconds with error
00:20:20.038 Verification LBA range: start 0x0 length 0x400
00:20:20.038 Nvme1n1 : 1.02 125.98 7.87 62.99 0.00 334540.55 23107.51 273406.48
00:20:20.038 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:20.038 Job: Nvme2n1 ended in about 1.02 seconds with error
00:20:20.038 Verification LBA range: start 0x0 length 0x400
00:20:20.038 Nvme2n1 : 1.02 192.29 12.02 62.79 0.00 241997.19 18641.35 257872.02
00:20:20.038 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:20.038 Job: Nvme3n1 ended in about 1.00 seconds with error
00:20:20.038 Verification LBA range: start 0x0 length 0x400
00:20:20.038 Nvme3n1 : 1.00 192.00 12.00 64.00 0.00 235164.16 7524.50 262532.36
00:20:20.038 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:20.038 Job: Nvme4n1 ended in about 1.00 seconds with error
00:20:20.038 Verification LBA range: start 0x0 length 0x400
00:20:20.038 Nvme4n1 : 1.00 191.79 11.99 63.93 0.00 229624.60 20194.80 234570.33
00:20:20.038 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:20.038 Job: Nvme5n1 ended in about 1.02 seconds with error
00:20:20.038 Verification LBA range: start 0x0 length 0x400
00:20:20.038 Nvme5n1 : 1.02 125.18 7.82 62.59 0.00 305612.93 22330.79 284280.60
00:20:20.038 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:20.038 Job: Nvme6n1 ended in about 1.00 seconds with error
00:20:20.038 Verification LBA range: start 0x0 length 0x400
00:20:20.038 Nvme6n1 : 1.00 191.57 11.97 63.86 0.00 218386.77 13204.29 274959.93
00:20:20.038 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:20.038 Job: Nvme7n1 ended in about 1.03 seconds with error
00:20:20.038 Verification LBA range: start 0x0 length 0x400
00:20:20.038 Nvme7n1 : 1.03 124.79 7.80 62.40 0.00 291468.58 17767.54 262532.36
00:20:20.038 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:20.038 Job: Nvme8n1 ended in about 1.04 seconds with error
00:20:20.038 Verification LBA range: start 0x0 length 0x400
00:20:20.038 Nvme8n1 : 1.04 188.80 11.80 61.65 0.00 212667.87 18155.90 259425.47
00:20:20.038 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:20.038 Job: Nvme9n1 ended in about 1.04 seconds with error
00:20:20.038 Verification LBA range: start 0x0 length 0x400
00:20:20.038 Nvme9n1 : 1.04 122.91 7.68 61.45 0.00 281635.65 22330.79 262532.36
00:20:20.038 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:20.038 Job: Nvme10n1 ended in about 1.05 seconds with error
00:20:20.038 Verification LBA range: start 0x0 length 0x400
00:20:20.038 Nvme10n1 : 1.05 122.41 7.65 61.20 0.00 275460.30 20583.16 285834.05
00:20:20.038 ===================================================================================================================
00:20:20.038 Total : 1577.72 98.61 626.86 0.00 257535.46 7524.50 285834.05
00:20:20.038 [2024-07-15 23:23:35.090353] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:20:20.039 [2024-07-15 23:23:35.090433] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:20:20.039 [2024-07-15 23:23:35.090859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.039 [2024-07-15 23:23:35.090895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcd0d90 with addr=10.0.0.2, port=4420
00:20:20.039 [2024-07-15 23:23:35.090917] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd0d90 is same with the state(5) to be set
00:20:20.039 [2024-07-15 23:23:35.091047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.039 [2024-07-15 23:23:35.091073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe9cac0 with addr=10.0.0.2, port=4420
00:20:20.039 [2024-07-15 23:23:35.091097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9cac0 is same with the state(5) to be set
00:20:20.039 [2024-07-15 23:23:35.091285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.039 [2024-07-15 23:23:35.091317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd001a0 with addr=10.0.0.2, port=4420
00:20:20.039 [2024-07-15 23:23:35.091334] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd001a0 is same with the state(5) to be set
00:20:20.039 [2024-07-15 23:23:35.091484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.039 [2024-07-15 23:23:35.091509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda15c0 with addr=10.0.0.2, port=4420
00:20:20.039 [2024-07-15 23:23:35.091526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda15c0 is same with the state(5) to be set
00:20:20.039 [2024-07-15 23:23:35.093475] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:20:20.039 [2024-07-15 23:23:35.093506] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:20:20.039 [2024-07-15 23:23:35.093753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.039 [2024-07-15 23:23:35.093781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe91040 with addr=10.0.0.2, port=4420
00:20:20.039 [2024-07-15 23:23:35.093798] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe91040 is same with the state(5) to be set
00:20:20.039 [2024-07-15 23:23:35.093908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.039 [2024-07-15 23:23:35.093936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd97a20 with addr=10.0.0.2, port=4420
00:20:20.039 [2024-07-15 23:23:35.093965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97a20 is same with the state(5) to be set
00:20:20.039 [2024-07-15 23:23:35.094101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.039 [2024-07-15 23:23:35.094127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe85520 with addr=10.0.0.2, port=4420
00:20:20.039 [2024-07-15 23:23:35.094142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe85520 is same with the state(5) to be set
00:20:20.039 [2024-07-15 23:23:35.094178] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcd0d90 (9): Bad file descriptor
00:20:20.039 [2024-07-15 23:23:35.094201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9cac0 (9): Bad file descriptor
00:20:20.039 [2024-07-15 23:23:35.094219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd001a0 (9): Bad file descriptor
00:20:20.039 [2024-07-15 23:23:35.094236] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda15c0 (9): Bad file descriptor
00:20:20.039 [2024-07-15 23:23:35.094280] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:20.039 [2024-07-15 23:23:35.094305] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:20.039 [2024-07-15 23:23:35.094327] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:20.039 [2024-07-15 23:23:35.094345] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:20.039 [2024-07-15 23:23:35.094362] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:20.039 [2024-07-15 23:23:35.094458] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:20:20.039 [2024-07-15 23:23:35.094750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.039 [2024-07-15 23:23:35.094777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d2610 with addr=10.0.0.2, port=4420
00:20:20.039 [2024-07-15 23:23:35.094793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2610 is same with the state(5) to be set
00:20:20.039 [2024-07-15 23:23:35.094922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.039 [2024-07-15 23:23:35.094948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcf4d50 with addr=10.0.0.2, port=4420
00:20:20.039 [2024-07-15 23:23:35.094963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf4d50 is same with the state(5) to be set
00:20:20.039 [2024-07-15 23:23:35.094991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe91040 (9): Bad file descriptor
00:20:20.039 [2024-07-15 23:23:35.095010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd97a20 (9): Bad file descriptor
00:20:20.039 [2024-07-15 23:23:35.095027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe85520 (9): Bad file descriptor
00:20:20.039 [2024-07-15 23:23:35.095053] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.039 [2024-07-15 23:23:35.095066] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.039 [2024-07-15 23:23:35.095082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.039 [2024-07-15 23:23:35.095102] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:20:20.039 [2024-07-15 23:23:35.095116] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:20:20.039 [2024-07-15 23:23:35.095129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:20:20.039 [2024-07-15 23:23:35.095160] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:20:20.039 [2024-07-15 23:23:35.095174] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:20:20.039 [2024-07-15 23:23:35.095187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:20:20.039 [2024-07-15 23:23:35.095204] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:20:20.039 [2024-07-15 23:23:35.095218] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:20:20.039 [2024-07-15 23:23:35.095231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:20:20.039 [2024-07-15 23:23:35.095329] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.039 [2024-07-15 23:23:35.095349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.039 [2024-07-15 23:23:35.095361] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.039 [2024-07-15 23:23:35.095372] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.039 [2024-07-15 23:23:35.095555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.039 [2024-07-15 23:23:35.095579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfca40 with addr=10.0.0.2, port=4420
00:20:20.039 [2024-07-15 23:23:35.095595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfca40 is same with the state(5) to be set
00:20:20.039 [2024-07-15 23:23:35.095612] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d2610 (9): Bad file descriptor
00:20:20.039 [2024-07-15 23:23:35.095631] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf4d50 (9): Bad file descriptor
00:20:20.039 [2024-07-15 23:23:35.095646] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:20:20.039 [2024-07-15 23:23:35.095658] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:20:20.039 [2024-07-15 23:23:35.095671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:20:20.039 [2024-07-15 23:23:35.095688] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:20:20.039 [2024-07-15 23:23:35.095701] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:20:20.039 [2024-07-15 23:23:35.095714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:20:20.039 [2024-07-15 23:23:35.095729] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:20:20.039 [2024-07-15 23:23:35.095759] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:20:20.039 [2024-07-15 23:23:35.095773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:20:20.039 [2024-07-15 23:23:35.095809] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.039 [2024-07-15 23:23:35.095827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.039 [2024-07-15 23:23:35.095839] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.039 [2024-07-15 23:23:35.095854] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfca40 (9): Bad file descriptor
00:20:20.039 [2024-07-15 23:23:35.095871] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:20:20.039 [2024-07-15 23:23:35.095883] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:20:20.039 [2024-07-15 23:23:35.095896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:20:20.039 [2024-07-15 23:23:35.095917] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:20:20.039 [2024-07-15 23:23:35.095932] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:20:20.039 [2024-07-15 23:23:35.095944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:20:20.039 [2024-07-15 23:23:35.095984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.039 [2024-07-15 23:23:35.096002] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.039 [2024-07-15 23:23:35.096014] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:20:20.039 [2024-07-15 23:23:35.096034] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:20:20.039 [2024-07-15 23:23:35.096047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:20:20.039 [2024-07-15 23:23:35.096084] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.298 23:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:20:20.298 23:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:20:21.669 23:23:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2384077 00:20:21.669 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2384077) - No such process 00:20:21.669 23:23:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:20:21.669 23:23:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:20:21.669 23:23:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:21.669 23:23:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:21.669 23:23:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:21.669 23:23:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:21.669 23:23:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:21.669 23:23:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:20:21.669 23:23:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:21.669 23:23:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:20:21.669 23:23:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:21.669 23:23:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:21.669 rmmod nvme_tcp 00:20:21.669 rmmod nvme_fabrics 00:20:21.669 rmmod nvme_keyring 00:20:21.669 23:23:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:21.669 23:23:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:20:21.669 23:23:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:20:21.669 23:23:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:20:21.669 23:23:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:21.669 23:23:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:21.669 23:23:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:21.669 23:23:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:21.669 23:23:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:21.669 23:23:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.669 23:23:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:21.669 23:23:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.566 23:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:23.566 00:20:23.566 real 0m8.655s 00:20:23.566 user 0m23.758s 00:20:23.566 sys 0m1.515s 00:20:23.566 
23:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:23.566 23:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:23.566 ************************************ 00:20:23.566 END TEST nvmf_shutdown_tc3 00:20:23.566 ************************************ 00:20:23.566 23:23:38 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:20:23.566 23:23:38 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:20:23.566 00:20:23.566 real 0m29.452s 00:20:23.566 user 1m25.896s 00:20:23.566 sys 0m6.397s 00:20:23.567 23:23:38 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:23.567 23:23:38 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:23.567 ************************************ 00:20:23.567 END TEST nvmf_shutdown 00:20:23.567 ************************************ 00:20:23.567 23:23:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:23.567 23:23:38 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:20:23.567 23:23:38 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:23.567 23:23:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:23.567 23:23:38 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:20:23.567 23:23:38 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:23.567 23:23:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:23.567 23:23:38 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:20:23.567 23:23:38 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:23.567 23:23:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:23.567 23:23:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:23.567 23:23:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:23.567 ************************************ 00:20:23.567 START TEST nvmf_multicontroller 00:20:23.567 ************************************ 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:23.567 * Looking for test storage... 
00:20:23.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:23.567 23:23:38 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:20:23.567 23:23:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:26.099 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:26.099 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:20:26.099 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:26.099 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:26.099 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:26.099 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:26.099 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:26.099 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:20:26.099 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:26.099 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:20:26.099 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:20:26.099 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:20:26.099 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:20:26.099 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:20:26.099 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:20:26.099 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:26.099 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:26.099 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:26.099 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:26.099 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:26.099 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:26.099 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:26.099 23:23:40 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:26.099 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:26.099 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:26.099 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:26.099 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:26.099 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:26.099 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:26.099 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:26.099 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:26.099 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:26.099 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:26.100 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:26.100 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:26.100 Found net devices under 0000:84:00.0: cvl_0_0 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:26.100 Found net devices under 0000:84:00.1: cvl_0_1 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:26.100 23:23:40 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:26.100 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:26.100 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:20:26.100 00:20:26.100 --- 10.0.0.2 ping statistics --- 00:20:26.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.100 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:26.100 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:26.100 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:20:26.100 00:20:26.100 --- 10.0.0.1 ping statistics --- 00:20:26.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.100 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:26.100 23:23:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:26.100 23:23:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:26.100 23:23:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:26.100 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:26.100 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:26.100 23:23:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=2386616 00:20:26.100 23:23:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:26.100 23:23:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 2386616 00:20:26.100 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 2386616 ']' 00:20:26.100 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.100 23:23:41 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:20:26.100 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.100 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:26.100 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:26.100 [2024-07-15 23:23:41.064396] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:20:26.100 [2024-07-15 23:23:41.064493] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:26.100 EAL: No free 2048 kB hugepages reported on node 1 00:20:26.100 [2024-07-15 23:23:41.131839] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:26.100 [2024-07-15 23:23:41.243170] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:26.100 [2024-07-15 23:23:41.243233] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:26.100 [2024-07-15 23:23:41.243261] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:26.100 [2024-07-15 23:23:41.243273] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:26.100 [2024-07-15 23:23:41.243283] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
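At this point the trace has finished the network bring-up and launched nvmf_tgt inside the cvl_0_0_ns_spdk namespace on cores 1-3 (-m 0xE), and is waiting on /var/tmp/spdk.sock. The sequence it just walked through condenses to a short script; the sketch below only restates commands visible in the trace (the cvl_0_0/cvl_0_1 interface names, the 10.0.0.0/24 addresses and the SPDK tree path are taken from this run), and the socket-polling loop is a hypothetical stand-in for SPDK's waitforlisten helper, not its actual implementation.

#!/usr/bin/env bash
# Minimal sketch of the namespace/target bring-up performed by nvmf/common.sh in the trace above.
set -e
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                     # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Start the target inside the namespace on cores 1-3 and wait for its RPC socket.
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done  # stand-in for waitforlisten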
00:20:26.100 [2024-07-15 23:23:41.243371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.100 [2024-07-15 23:23:41.243436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:26.100 [2024-07-15 23:23:41.243439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.100 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:26.100 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:20:26.100 23:23:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:26.100 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:26.100 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:26.100 23:23:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:26.100 23:23:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:26.100 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.101 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:26.101 [2024-07-15 23:23:41.395812] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:26.101 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.101 23:23:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:26.101 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.101 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:26.359 Malloc0 00:20:26.359 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.359 23:23:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:26.359 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.359 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:26.359 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.359 23:23:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:26.359 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.359 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:26.359 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.359 23:23:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:26.359 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.359 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:26.359 [2024-07-15 23:23:41.458969] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:26.359 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.359 
23:23:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:26.359 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.359 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:26.359 [2024-07-15 23:23:41.466854] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:26.359 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.359 23:23:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:26.359 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.359 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:26.359 Malloc1 00:20:26.359 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.359 23:23:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:26.359 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.359 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:26.359 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.359 23:23:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:26.359 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.359 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:26.359 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.359 23:23:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:26.359 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.359 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:26.359 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.359 23:23:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:26.359 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.359 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:26.359 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.359 23:23:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2386644 00:20:26.359 23:23:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:26.359 23:23:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:26.359 23:23:41 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 2386644 /var/tmp/bdevperf.sock 00:20:26.359 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 2386644 ']' 00:20:26.359 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:26.359 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:26.359 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:26.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:26.360 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:26.360 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:26.618 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:26.618 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:20:26.618 23:23:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:26.618 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.618 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:26.876 NVMe0n1 00:20:26.876 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.876 23:23:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:26.876 23:23:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:26.876 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.876 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:26.876 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.876 1 00:20:26.876 23:23:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:26.876 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:26.876 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:26.876 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:26.876 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:26.876 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:26.876 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:26.876 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:26.876 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.876 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:26.876 request: 00:20:26.876 { 00:20:26.876 "name": "NVMe0", 00:20:26.876 "trtype": "tcp", 00:20:26.876 "traddr": "10.0.0.2", 00:20:26.876 "adrfam": "ipv4", 00:20:26.876 "trsvcid": "4420", 00:20:26.876 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:26.876 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:26.876 "hostaddr": "10.0.0.2", 00:20:26.876 "hostsvcid": "60000", 00:20:26.876 "prchk_reftag": false, 00:20:26.876 "prchk_guard": false, 00:20:26.876 "hdgst": false, 00:20:26.876 "ddgst": false, 00:20:26.876 "method": "bdev_nvme_attach_controller", 00:20:26.876 "req_id": 1 00:20:26.876 } 00:20:26.876 Got JSON-RPC error response 00:20:26.876 response: 00:20:26.876 { 00:20:26.876 "code": -114, 00:20:26.876 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:26.876 } 00:20:26.876 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:26.876 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:26.876 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:26.876 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:26.876 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:26.876 23:23:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:26.876 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:26.876 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:26.876 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:26.876 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:26.876 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:26.876 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:26.876 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:26.876 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.876 23:23:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:26.876 request: 00:20:26.876 { 00:20:26.876 "name": "NVMe0", 00:20:26.876 "trtype": "tcp", 00:20:26.876 "traddr": "10.0.0.2", 00:20:26.876 "adrfam": "ipv4", 00:20:26.876 "trsvcid": "4420", 00:20:26.876 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:26.876 "hostaddr": "10.0.0.2", 00:20:26.876 "hostsvcid": "60000", 00:20:26.876 "prchk_reftag": false, 00:20:26.876 "prchk_guard": false, 00:20:26.876 
"hdgst": false, 00:20:26.876 "ddgst": false, 00:20:26.876 "method": "bdev_nvme_attach_controller", 00:20:26.876 "req_id": 1 00:20:26.876 } 00:20:26.876 Got JSON-RPC error response 00:20:26.876 response: 00:20:26.876 { 00:20:26.876 "code": -114, 00:20:26.877 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:26.877 } 00:20:26.877 23:23:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:26.877 23:23:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:26.877 23:23:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:26.877 23:23:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:26.877 23:23:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:26.877 23:23:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:26.877 23:23:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:26.877 23:23:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:26.877 23:23:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:26.877 23:23:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:26.877 23:23:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:26.877 23:23:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:26.877 23:23:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:26.877 23:23:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.877 23:23:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:26.877 request: 00:20:26.877 { 00:20:26.877 "name": "NVMe0", 00:20:26.877 "trtype": "tcp", 00:20:26.877 "traddr": "10.0.0.2", 00:20:26.877 "adrfam": "ipv4", 00:20:26.877 "trsvcid": "4420", 00:20:26.877 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:26.877 "hostaddr": "10.0.0.2", 00:20:26.877 "hostsvcid": "60000", 00:20:26.877 "prchk_reftag": false, 00:20:26.877 "prchk_guard": false, 00:20:26.877 "hdgst": false, 00:20:26.877 "ddgst": false, 00:20:26.877 "multipath": "disable", 00:20:26.877 "method": "bdev_nvme_attach_controller", 00:20:26.877 "req_id": 1 00:20:26.877 } 00:20:26.877 Got JSON-RPC error response 00:20:26.877 response: 00:20:26.877 { 00:20:26.877 "code": -114, 00:20:26.877 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:20:26.877 } 00:20:26.877 23:23:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:26.877 23:23:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:26.877 23:23:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:26.877 23:23:42 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:26.877 23:23:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:26.877 23:23:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:26.877 23:23:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:26.877 23:23:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:26.877 23:23:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:26.877 23:23:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:26.877 23:23:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:26.877 23:23:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:26.877 23:23:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:26.877 23:23:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.877 23:23:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:26.877 request: 00:20:26.877 { 00:20:26.877 "name": "NVMe0", 00:20:26.877 "trtype": "tcp", 00:20:26.877 "traddr": "10.0.0.2", 00:20:26.877 "adrfam": "ipv4", 00:20:26.877 "trsvcid": "4420", 00:20:26.877 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:26.877 "hostaddr": "10.0.0.2", 00:20:26.877 "hostsvcid": "60000", 00:20:26.877 "prchk_reftag": false, 00:20:26.877 "prchk_guard": false, 00:20:26.877 "hdgst": false, 00:20:26.877 "ddgst": false, 00:20:26.877 "multipath": "failover", 00:20:26.877 "method": "bdev_nvme_attach_controller", 00:20:26.877 "req_id": 1 00:20:26.877 } 00:20:26.877 Got JSON-RPC error response 00:20:26.877 response: 00:20:26.877 { 00:20:26.877 "code": -114, 00:20:26.877 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:26.877 } 00:20:26.877 23:23:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:26.877 23:23:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:26.877 23:23:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:26.877 23:23:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:26.877 23:23:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:26.877 23:23:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:26.877 23:23:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.877 23:23:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:27.134 00:20:27.134 23:23:42 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.134 23:23:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:27.134 23:23:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.134 23:23:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:27.134 23:23:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.135 23:23:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:27.135 23:23:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.135 23:23:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:27.391 00:20:27.391 23:23:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.391 23:23:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:27.391 23:23:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:27.391 23:23:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.391 23:23:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:27.391 23:23:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.391 23:23:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:27.391 23:23:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:28.322 0 00:20:28.579 23:23:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:28.579 23:23:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.579 23:23:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:28.579 23:23:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.579 23:23:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 2386644 00:20:28.579 23:23:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 2386644 ']' 00:20:28.579 23:23:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 2386644 00:20:28.579 23:23:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:20:28.579 23:23:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:28.579 23:23:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2386644 00:20:28.579 23:23:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:28.579 23:23:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:28.579 23:23:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2386644' 00:20:28.579 killing process with pid 2386644 00:20:28.579 23:23:43 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 2386644 00:20:28.579 23:23:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 2386644 00:20:28.836 23:23:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:28.836 23:23:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.836 23:23:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:28.836 23:23:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.836 23:23:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:28.836 23:23:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.836 23:23:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:28.836 23:23:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.836 23:23:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:20:28.836 23:23:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:28.836 23:23:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:20:28.836 23:23:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:20:28.836 23:23:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:20:28.836 23:23:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:20:28.836 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:28.836 [2024-07-15 23:23:41.574318] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:20:28.836 [2024-07-15 23:23:41.574440] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2386644 ] 00:20:28.836 EAL: No free 2048 kB hugepages reported on node 1 00:20:28.836 [2024-07-15 23:23:41.640344] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.836 [2024-07-15 23:23:41.750822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.836 [2024-07-15 23:23:42.489278] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name b01366f4-75d3-44db-90eb-b063463fdcd3 already exists 00:20:28.836 [2024-07-15 23:23:42.489315] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:b01366f4-75d3-44db-90eb-b063463fdcd3 alias for bdev NVMe1n1 00:20:28.836 [2024-07-15 23:23:42.489345] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:28.836 Running I/O for 1 seconds... 
00:20:28.836 00:20:28.836 Latency(us) 00:20:28.836 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:28.836 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:28.836 NVMe0n1 : 1.01 19042.20 74.38 0.00 0.00 6711.51 2148.12 11990.66 00:20:28.836 =================================================================================================================== 00:20:28.836 Total : 19042.20 74.38 0.00 0.00 6711.51 2148.12 11990.66 00:20:28.836 Received shutdown signal, test time was about 1.000000 seconds 00:20:28.836 00:20:28.836 Latency(us) 00:20:28.836 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:28.836 =================================================================================================================== 00:20:28.836 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:28.836 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:28.836 23:23:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:28.836 23:23:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:20:28.836 23:23:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:20:28.836 23:23:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:28.836 23:23:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:20:28.836 23:23:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:28.836 23:23:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:20:28.836 23:23:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:28.836 23:23:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:28.836 rmmod nvme_tcp 00:20:28.836 rmmod nvme_fabrics 00:20:28.836 rmmod nvme_keyring 00:20:28.836 23:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:28.836 23:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:20:28.836 23:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:20:28.836 23:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 2386616 ']' 00:20:28.836 23:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 2386616 00:20:28.836 23:23:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 2386616 ']' 00:20:28.836 23:23:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 2386616 00:20:28.836 23:23:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:20:28.836 23:23:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:28.836 23:23:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2386616 00:20:28.836 23:23:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:28.836 23:23:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:28.836 23:23:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2386616' 00:20:28.836 killing process with pid 2386616 00:20:28.836 23:23:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 2386616 00:20:28.836 23:23:44 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 2386616 00:20:29.094 23:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:29.094 23:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:29.094 23:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:29.094 23:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:29.094 23:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:29.094 23:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:29.094 23:23:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:29.094 23:23:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:31.623 23:23:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:31.623 00:20:31.623 real 0m7.614s 00:20:31.623 user 0m12.238s 00:20:31.623 sys 0m2.372s 00:20:31.623 23:23:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:31.623 23:23:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:31.623 ************************************ 00:20:31.623 END TEST nvmf_multicontroller 00:20:31.623 ************************************ 00:20:31.623 23:23:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:31.623 23:23:46 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:31.623 23:23:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:31.623 23:23:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:31.623 23:23:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:31.623 ************************************ 00:20:31.623 START TEST nvmf_aer 00:20:31.623 ************************************ 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:31.623 * Looking for test storage... 
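The multicontroller run above tore its target state back down through nvmftestfini (subsystems deleted, modules removed, nvmfpid killed), and the aer test starting here repeats the same nvmftestinit sequence before building its own target. For reference, the target-side RPCs seen earlier map onto scripts/rpc.py roughly as sketched below; this assumes rpc_cmd in the trace wraps rpc.py against /var/tmp/spdk.sock, and the last line shows how a kernel-NVMe host could attach with nvme-cli using the host NQN generated at the top of the test. It is an illustrative sketch of this run's wiring, not the test script itself.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"   # assumed rpc_cmd equivalent, talking to /var/tmp/spdk.sock

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side (root namespace): attach with the kernel host stack.
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02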
00:20:31.623 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:20:31.623 23:23:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:33.523 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:33.523 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:20:33.523 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:33.523 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:20:33.523 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:33.523 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:33.523 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:33.523 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:20:33.523 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:33.523 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:20:33.523 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:20:33.523 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:20:33.523 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:20:33.523 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:20:33.523 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:20:33.523 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:33.523 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:33.523 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:33.523 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:33.523 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:33.523 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:33.523 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:33.523 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:33.523 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:33.523 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:33.523 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:33.523 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:33.523 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:33.523 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:33.524 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 
0x159b)' 00:20:33.524 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:33.524 Found net devices under 0000:84:00.0: cvl_0_0 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:33.524 Found net devices under 0000:84:00.1: cvl_0_1 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:33.524 
23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:33.524 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:33.524 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 00:20:33.524 00:20:33.524 --- 10.0.0.2 ping statistics --- 00:20:33.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:33.524 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:33.524 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:33.524 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:20:33.524 00:20:33.524 --- 10.0.0.1 ping statistics --- 00:20:33.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:33.524 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2388978 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2388978 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 2388978 ']' 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:33.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:33.524 23:23:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:33.524 [2024-07-15 23:23:48.649585] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:20:33.524 [2024-07-15 23:23:48.649685] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:33.524 EAL: No free 2048 kB hugepages reported on node 1 00:20:33.524 [2024-07-15 23:23:48.714969] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:33.524 [2024-07-15 23:23:48.827430] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:33.524 [2024-07-15 23:23:48.827470] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:33.524 [2024-07-15 23:23:48.827507] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:33.524 [2024-07-15 23:23:48.827519] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:33.524 [2024-07-15 23:23:48.827528] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:33.524 [2024-07-15 23:23:48.827620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:33.524 [2024-07-15 23:23:48.827938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:33.524 [2024-07-15 23:23:48.827972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:33.524 [2024-07-15 23:23:48.827975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:33.782 23:23:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:33.782 23:23:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:20:33.782 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:33.782 23:23:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:33.782 23:23:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:33.782 23:23:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:33.782 23:23:48 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:33.782 23:23:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.782 23:23:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:33.782 [2024-07-15 23:23:48.969369] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:33.782 23:23:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.782 23:23:48 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:33.782 23:23:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.782 23:23:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:33.782 Malloc0 00:20:33.782 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.782 23:23:49 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:33.782 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.782 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:33.782 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.782 23:23:49 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:33.782 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.782 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:33.782 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.782 23:23:49 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:33.782 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.782 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:33.782 [2024-07-15 23:23:49.020519] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:20:33.782 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.782 23:23:49 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:33.782 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.782 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:33.782 [ 00:20:33.782 { 00:20:33.782 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:33.782 "subtype": "Discovery", 00:20:33.782 "listen_addresses": [], 00:20:33.782 "allow_any_host": true, 00:20:33.782 "hosts": [] 00:20:33.782 }, 00:20:33.782 { 00:20:33.782 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:33.782 "subtype": "NVMe", 00:20:33.782 "listen_addresses": [ 00:20:33.782 { 00:20:33.782 "trtype": "TCP", 00:20:33.782 "adrfam": "IPv4", 00:20:33.782 "traddr": "10.0.0.2", 00:20:33.782 "trsvcid": "4420" 00:20:33.782 } 00:20:33.782 ], 00:20:33.782 "allow_any_host": true, 00:20:33.782 "hosts": [], 00:20:33.782 "serial_number": "SPDK00000000000001", 00:20:33.782 "model_number": "SPDK bdev Controller", 00:20:33.782 "max_namespaces": 2, 00:20:33.782 "min_cntlid": 1, 00:20:33.782 "max_cntlid": 65519, 00:20:33.782 "namespaces": [ 00:20:33.782 { 00:20:33.782 "nsid": 1, 00:20:33.782 "bdev_name": "Malloc0", 00:20:33.782 "name": "Malloc0", 00:20:33.782 "nguid": "86472D6F7140445F842F72132D0A3FE5", 00:20:33.782 "uuid": "86472d6f-7140-445f-842f-72132d0a3fe5" 00:20:33.782 } 00:20:33.782 ] 00:20:33.782 } 00:20:33.782 ] 00:20:33.782 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.782 23:23:49 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:33.782 23:23:49 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:33.782 23:23:49 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=2389017 00:20:33.782 23:23:49 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:33.782 23:23:49 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:33.782 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:20:33.782 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:33.782 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:20:33.782 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:20:33.782 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:33.782 EAL: No free 2048 kB hugepages reported on node 1 00:20:34.039 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:34.039 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:20:34.039 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:20:34.039 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:34.039 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:34.040 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:20:34.040 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:20:34.040 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:34.040 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:34.040 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:34.040 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:20:34.040 23:23:49 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:34.040 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.040 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:34.297 Malloc1 00:20:34.297 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.297 23:23:49 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:34.297 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.297 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:34.297 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.297 23:23:49 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:34.297 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.297 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:34.297 Asynchronous Event Request test 00:20:34.298 Attaching to 10.0.0.2 00:20:34.298 Attached to 10.0.0.2 00:20:34.298 Registering asynchronous event callbacks... 00:20:34.298 Starting namespace attribute notice tests for all controllers... 00:20:34.298 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:34.298 aer_cb - Changed Namespace 00:20:34.298 Cleaning up... 
00:20:34.298 [ 00:20:34.298 { 00:20:34.298 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:34.298 "subtype": "Discovery", 00:20:34.298 "listen_addresses": [], 00:20:34.298 "allow_any_host": true, 00:20:34.298 "hosts": [] 00:20:34.298 }, 00:20:34.298 { 00:20:34.298 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.298 "subtype": "NVMe", 00:20:34.298 "listen_addresses": [ 00:20:34.298 { 00:20:34.298 "trtype": "TCP", 00:20:34.298 "adrfam": "IPv4", 00:20:34.298 "traddr": "10.0.0.2", 00:20:34.298 "trsvcid": "4420" 00:20:34.298 } 00:20:34.298 ], 00:20:34.298 "allow_any_host": true, 00:20:34.298 "hosts": [], 00:20:34.298 "serial_number": "SPDK00000000000001", 00:20:34.298 "model_number": "SPDK bdev Controller", 00:20:34.298 "max_namespaces": 2, 00:20:34.298 "min_cntlid": 1, 00:20:34.298 "max_cntlid": 65519, 00:20:34.298 "namespaces": [ 00:20:34.298 { 00:20:34.298 "nsid": 1, 00:20:34.298 "bdev_name": "Malloc0", 00:20:34.298 "name": "Malloc0", 00:20:34.298 "nguid": "86472D6F7140445F842F72132D0A3FE5", 00:20:34.298 "uuid": "86472d6f-7140-445f-842f-72132d0a3fe5" 00:20:34.298 }, 00:20:34.298 { 00:20:34.298 "nsid": 2, 00:20:34.298 "bdev_name": "Malloc1", 00:20:34.298 "name": "Malloc1", 00:20:34.298 "nguid": "91C17C415A6C4C7EA12DF9FD5B6C5C66", 00:20:34.298 "uuid": "91c17c41-5a6c-4c7e-a12d-f9fd5b6c5c66" 00:20:34.298 } 00:20:34.298 ] 00:20:34.298 } 00:20:34.298 ] 00:20:34.298 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.298 23:23:49 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 2389017 00:20:34.298 23:23:49 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:34.298 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.298 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:34.298 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.298 23:23:49 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:34.298 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.298 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:34.298 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.298 23:23:49 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:34.298 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.298 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:34.298 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.298 23:23:49 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:34.298 23:23:49 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:20:34.298 23:23:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:34.298 23:23:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:20:34.298 23:23:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:34.298 23:23:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:20:34.298 23:23:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:34.298 23:23:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:34.298 rmmod nvme_tcp 00:20:34.298 rmmod nvme_fabrics 00:20:34.298 rmmod nvme_keyring 00:20:34.298 23:23:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:34.298 23:23:49 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@124 -- # set -e 00:20:34.298 23:23:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:20:34.298 23:23:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2388978 ']' 00:20:34.298 23:23:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2388978 00:20:34.298 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 2388978 ']' 00:20:34.298 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 2388978 00:20:34.298 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:20:34.298 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:34.298 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2388978 00:20:34.298 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:34.298 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:34.298 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2388978' 00:20:34.298 killing process with pid 2388978 00:20:34.298 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 2388978 00:20:34.298 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 2388978 00:20:34.556 23:23:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:34.556 23:23:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:34.556 23:23:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:34.556 23:23:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:34.556 23:23:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:34.556 23:23:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.556 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:34.556 23:23:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.086 23:23:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:37.086 00:20:37.086 real 0m5.464s 00:20:37.086 user 0m4.546s 00:20:37.086 sys 0m1.913s 00:20:37.086 23:23:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:37.086 23:23:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:37.086 ************************************ 00:20:37.086 END TEST nvmf_aer 00:20:37.086 ************************************ 00:20:37.086 23:23:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:37.086 23:23:51 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:37.086 23:23:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:37.086 23:23:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:37.086 23:23:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:37.086 ************************************ 00:20:37.086 START TEST nvmf_async_init 00:20:37.086 ************************************ 00:20:37.086 23:23:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:37.086 * Looking for test storage... 
00:20:37.086 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=8049a72e6d4f4e219316dba719b0bcd8 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:37.086 23:23:52 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:20:37.086 23:23:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:38.981 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:38.981 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:38.982 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:38.982 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:38.982 Found net devices under 0000:84:00.0: cvl_0_0 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:38.982 Found net devices under 0000:84:00.1: cvl_0_1 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:38.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:38.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:20:38.982 00:20:38.982 --- 10.0.0.2 ping statistics --- 00:20:38.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.982 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:38.982 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:38.982 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:20:38.982 00:20:38.982 --- 10.0.0.1 ping statistics --- 00:20:38.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.982 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2391086 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2391086 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 2391086 ']' 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:38.982 23:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:38.982 [2024-07-15 23:23:54.257285] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
00:20:38.982 [2024-07-15 23:23:54.257357] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:38.982 EAL: No free 2048 kB hugepages reported on node 1 00:20:39.240 [2024-07-15 23:23:54.320799] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.240 [2024-07-15 23:23:54.426307] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:39.240 [2024-07-15 23:23:54.426361] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:39.240 [2024-07-15 23:23:54.426375] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:39.240 [2024-07-15 23:23:54.426386] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:39.240 [2024-07-15 23:23:54.426395] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:39.240 [2024-07-15 23:23:54.426421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:39.240 23:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:39.240 23:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:20:39.240 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:39.240 23:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:39.240 23:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:39.498 23:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:39.498 23:23:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:39.498 23:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.498 23:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:39.498 [2024-07-15 23:23:54.569778] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:39.498 23:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.498 23:23:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:39.498 23:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.498 23:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:39.498 null0 00:20:39.498 23:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.498 23:23:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:39.498 23:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.498 23:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:39.498 23:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.498 23:23:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:39.498 23:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.498 23:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:39.498 23:23:54 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.498 23:23:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 8049a72e6d4f4e219316dba719b0bcd8 00:20:39.498 23:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.498 23:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:39.498 23:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.498 23:23:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:39.498 23:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.498 23:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:39.498 [2024-07-15 23:23:54.610015] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:39.498 23:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.498 23:23:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:39.498 23:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.498 23:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:39.756 nvme0n1 00:20:39.756 23:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.756 23:23:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:39.756 23:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.756 23:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:39.756 [ 00:20:39.756 { 00:20:39.756 "name": "nvme0n1", 00:20:39.756 "aliases": [ 00:20:39.756 "8049a72e-6d4f-4e21-9316-dba719b0bcd8" 00:20:39.756 ], 00:20:39.756 "product_name": "NVMe disk", 00:20:39.756 "block_size": 512, 00:20:39.756 "num_blocks": 2097152, 00:20:39.756 "uuid": "8049a72e-6d4f-4e21-9316-dba719b0bcd8", 00:20:39.756 "assigned_rate_limits": { 00:20:39.756 "rw_ios_per_sec": 0, 00:20:39.756 "rw_mbytes_per_sec": 0, 00:20:39.756 "r_mbytes_per_sec": 0, 00:20:39.756 "w_mbytes_per_sec": 0 00:20:39.756 }, 00:20:39.756 "claimed": false, 00:20:39.756 "zoned": false, 00:20:39.756 "supported_io_types": { 00:20:39.756 "read": true, 00:20:39.756 "write": true, 00:20:39.756 "unmap": false, 00:20:39.756 "flush": true, 00:20:39.756 "reset": true, 00:20:39.756 "nvme_admin": true, 00:20:39.756 "nvme_io": true, 00:20:39.756 "nvme_io_md": false, 00:20:39.756 "write_zeroes": true, 00:20:39.756 "zcopy": false, 00:20:39.756 "get_zone_info": false, 00:20:39.756 "zone_management": false, 00:20:39.756 "zone_append": false, 00:20:39.756 "compare": true, 00:20:39.756 "compare_and_write": true, 00:20:39.756 "abort": true, 00:20:39.756 "seek_hole": false, 00:20:39.756 "seek_data": false, 00:20:39.756 "copy": true, 00:20:39.756 "nvme_iov_md": false 00:20:39.756 }, 00:20:39.756 "memory_domains": [ 00:20:39.756 { 00:20:39.756 "dma_device_id": "system", 00:20:39.756 "dma_device_type": 1 00:20:39.756 } 00:20:39.756 ], 00:20:39.756 "driver_specific": { 00:20:39.756 "nvme": [ 00:20:39.756 { 00:20:39.756 "trid": { 00:20:39.756 "trtype": "TCP", 00:20:39.756 "adrfam": "IPv4", 00:20:39.756 "traddr": "10.0.0.2", 
00:20:39.756 "trsvcid": "4420", 00:20:39.756 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:39.757 }, 00:20:39.757 "ctrlr_data": { 00:20:39.757 "cntlid": 1, 00:20:39.757 "vendor_id": "0x8086", 00:20:39.757 "model_number": "SPDK bdev Controller", 00:20:39.757 "serial_number": "00000000000000000000", 00:20:39.757 "firmware_revision": "24.09", 00:20:39.757 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:39.757 "oacs": { 00:20:39.757 "security": 0, 00:20:39.757 "format": 0, 00:20:39.757 "firmware": 0, 00:20:39.757 "ns_manage": 0 00:20:39.757 }, 00:20:39.757 "multi_ctrlr": true, 00:20:39.757 "ana_reporting": false 00:20:39.757 }, 00:20:39.757 "vs": { 00:20:39.757 "nvme_version": "1.3" 00:20:39.757 }, 00:20:39.757 "ns_data": { 00:20:39.757 "id": 1, 00:20:39.757 "can_share": true 00:20:39.757 } 00:20:39.757 } 00:20:39.757 ], 00:20:39.757 "mp_policy": "active_passive" 00:20:39.757 } 00:20:39.757 } 00:20:39.757 ] 00:20:39.757 23:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.757 23:23:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:39.757 23:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.757 23:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:39.757 [2024-07-15 23:23:54.858566] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:39.757 [2024-07-15 23:23:54.858653] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b1740 (9): Bad file descriptor 00:20:39.757 [2024-07-15 23:23:54.990895] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:39.757 23:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.757 23:23:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:39.757 23:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.757 23:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:39.757 [ 00:20:39.757 { 00:20:39.757 "name": "nvme0n1", 00:20:39.757 "aliases": [ 00:20:39.757 "8049a72e-6d4f-4e21-9316-dba719b0bcd8" 00:20:39.757 ], 00:20:39.757 "product_name": "NVMe disk", 00:20:39.757 "block_size": 512, 00:20:39.757 "num_blocks": 2097152, 00:20:39.757 "uuid": "8049a72e-6d4f-4e21-9316-dba719b0bcd8", 00:20:39.757 "assigned_rate_limits": { 00:20:39.757 "rw_ios_per_sec": 0, 00:20:39.757 "rw_mbytes_per_sec": 0, 00:20:39.757 "r_mbytes_per_sec": 0, 00:20:39.757 "w_mbytes_per_sec": 0 00:20:39.757 }, 00:20:39.757 "claimed": false, 00:20:39.757 "zoned": false, 00:20:39.757 "supported_io_types": { 00:20:39.757 "read": true, 00:20:39.757 "write": true, 00:20:39.757 "unmap": false, 00:20:39.757 "flush": true, 00:20:39.757 "reset": true, 00:20:39.757 "nvme_admin": true, 00:20:39.757 "nvme_io": true, 00:20:39.757 "nvme_io_md": false, 00:20:39.757 "write_zeroes": true, 00:20:39.757 "zcopy": false, 00:20:39.757 "get_zone_info": false, 00:20:39.757 "zone_management": false, 00:20:39.757 "zone_append": false, 00:20:39.757 "compare": true, 00:20:39.757 "compare_and_write": true, 00:20:39.757 "abort": true, 00:20:39.757 "seek_hole": false, 00:20:39.757 "seek_data": false, 00:20:39.757 "copy": true, 00:20:39.757 "nvme_iov_md": false 00:20:39.757 }, 00:20:39.757 "memory_domains": [ 00:20:39.757 { 00:20:39.757 "dma_device_id": "system", 00:20:39.757 "dma_device_type": 
1 00:20:39.757 } 00:20:39.757 ], 00:20:39.757 "driver_specific": { 00:20:39.757 "nvme": [ 00:20:39.757 { 00:20:39.757 "trid": { 00:20:39.757 "trtype": "TCP", 00:20:39.757 "adrfam": "IPv4", 00:20:39.757 "traddr": "10.0.0.2", 00:20:39.757 "trsvcid": "4420", 00:20:39.757 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:39.757 }, 00:20:39.757 "ctrlr_data": { 00:20:39.757 "cntlid": 2, 00:20:39.757 "vendor_id": "0x8086", 00:20:39.757 "model_number": "SPDK bdev Controller", 00:20:39.757 "serial_number": "00000000000000000000", 00:20:39.757 "firmware_revision": "24.09", 00:20:39.757 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:39.757 "oacs": { 00:20:39.757 "security": 0, 00:20:39.757 "format": 0, 00:20:39.757 "firmware": 0, 00:20:39.757 "ns_manage": 0 00:20:39.757 }, 00:20:39.757 "multi_ctrlr": true, 00:20:39.757 "ana_reporting": false 00:20:39.757 }, 00:20:39.757 "vs": { 00:20:39.757 "nvme_version": "1.3" 00:20:39.757 }, 00:20:39.757 "ns_data": { 00:20:39.757 "id": 1, 00:20:39.757 "can_share": true 00:20:39.757 } 00:20:39.757 } 00:20:39.757 ], 00:20:39.757 "mp_policy": "active_passive" 00:20:39.757 } 00:20:39.757 } 00:20:39.757 ] 00:20:39.757 23:23:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.757 23:23:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:39.757 23:23:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.757 23:23:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:39.757 23:23:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.757 23:23:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:20:39.757 23:23:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.BbgKiwfCfs 00:20:39.757 23:23:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:39.757 23:23:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.BbgKiwfCfs 00:20:39.757 23:23:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:39.757 23:23:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.757 23:23:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:39.757 23:23:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.757 23:23:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:39.758 23:23:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.758 23:23:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:39.758 [2024-07-15 23:23:55.039219] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:39.758 [2024-07-15 23:23:55.039372] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:39.758 23:23:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.758 23:23:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BbgKiwfCfs 00:20:39.758 23:23:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
00:20:39.758 23:23:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:39.758 [2024-07-15 23:23:55.047221] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:39.758 23:23:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.758 23:23:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BbgKiwfCfs 00:20:39.758 23:23:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.758 23:23:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:39.758 [2024-07-15 23:23:55.055259] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:39.758 [2024-07-15 23:23:55.055326] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:40.016 nvme0n1 00:20:40.016 23:23:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.017 23:23:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:40.017 23:23:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.017 23:23:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:40.017 [ 00:20:40.017 { 00:20:40.017 "name": "nvme0n1", 00:20:40.017 "aliases": [ 00:20:40.017 "8049a72e-6d4f-4e21-9316-dba719b0bcd8" 00:20:40.017 ], 00:20:40.017 "product_name": "NVMe disk", 00:20:40.017 "block_size": 512, 00:20:40.017 "num_blocks": 2097152, 00:20:40.017 "uuid": "8049a72e-6d4f-4e21-9316-dba719b0bcd8", 00:20:40.017 "assigned_rate_limits": { 00:20:40.017 "rw_ios_per_sec": 0, 00:20:40.017 "rw_mbytes_per_sec": 0, 00:20:40.017 "r_mbytes_per_sec": 0, 00:20:40.017 "w_mbytes_per_sec": 0 00:20:40.017 }, 00:20:40.017 "claimed": false, 00:20:40.017 "zoned": false, 00:20:40.017 "supported_io_types": { 00:20:40.017 "read": true, 00:20:40.017 "write": true, 00:20:40.017 "unmap": false, 00:20:40.017 "flush": true, 00:20:40.017 "reset": true, 00:20:40.017 "nvme_admin": true, 00:20:40.017 "nvme_io": true, 00:20:40.017 "nvme_io_md": false, 00:20:40.017 "write_zeroes": true, 00:20:40.017 "zcopy": false, 00:20:40.017 "get_zone_info": false, 00:20:40.017 "zone_management": false, 00:20:40.017 "zone_append": false, 00:20:40.017 "compare": true, 00:20:40.017 "compare_and_write": true, 00:20:40.017 "abort": true, 00:20:40.017 "seek_hole": false, 00:20:40.017 "seek_data": false, 00:20:40.017 "copy": true, 00:20:40.017 "nvme_iov_md": false 00:20:40.017 }, 00:20:40.017 "memory_domains": [ 00:20:40.017 { 00:20:40.017 "dma_device_id": "system", 00:20:40.017 "dma_device_type": 1 00:20:40.017 } 00:20:40.017 ], 00:20:40.017 "driver_specific": { 00:20:40.017 "nvme": [ 00:20:40.017 { 00:20:40.017 "trid": { 00:20:40.017 "trtype": "TCP", 00:20:40.017 "adrfam": "IPv4", 00:20:40.017 "traddr": "10.0.0.2", 00:20:40.017 "trsvcid": "4421", 00:20:40.017 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:40.017 }, 00:20:40.017 "ctrlr_data": { 00:20:40.017 "cntlid": 3, 00:20:40.017 "vendor_id": "0x8086", 00:20:40.017 "model_number": "SPDK bdev Controller", 00:20:40.017 "serial_number": "00000000000000000000", 00:20:40.017 "firmware_revision": "24.09", 00:20:40.017 "subnqn": "nqn.2016-06.io.spdk:cnode0", 
00:20:40.017 "oacs": { 00:20:40.017 "security": 0, 00:20:40.017 "format": 0, 00:20:40.017 "firmware": 0, 00:20:40.017 "ns_manage": 0 00:20:40.017 }, 00:20:40.017 "multi_ctrlr": true, 00:20:40.017 "ana_reporting": false 00:20:40.017 }, 00:20:40.017 "vs": { 00:20:40.017 "nvme_version": "1.3" 00:20:40.017 }, 00:20:40.017 "ns_data": { 00:20:40.017 "id": 1, 00:20:40.017 "can_share": true 00:20:40.017 } 00:20:40.017 } 00:20:40.017 ], 00:20:40.017 "mp_policy": "active_passive" 00:20:40.017 } 00:20:40.017 } 00:20:40.017 ] 00:20:40.017 23:23:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.017 23:23:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:40.017 23:23:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.017 23:23:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:40.017 23:23:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.017 23:23:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.BbgKiwfCfs 00:20:40.017 23:23:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:40.017 23:23:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:20:40.017 23:23:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:40.017 23:23:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:20:40.017 23:23:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:40.017 23:23:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:20:40.017 23:23:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:40.017 23:23:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:40.017 rmmod nvme_tcp 00:20:40.017 rmmod nvme_fabrics 00:20:40.017 rmmod nvme_keyring 00:20:40.017 23:23:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:40.017 23:23:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:20:40.017 23:23:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:20:40.017 23:23:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2391086 ']' 00:20:40.017 23:23:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 2391086 00:20:40.017 23:23:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 2391086 ']' 00:20:40.017 23:23:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 2391086 00:20:40.017 23:23:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:20:40.017 23:23:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:40.017 23:23:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2391086 00:20:40.017 23:23:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:40.017 23:23:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:40.017 23:23:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2391086' 00:20:40.017 killing process with pid 2391086 00:20:40.017 23:23:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 2391086 00:20:40.017 [2024-07-15 23:23:55.257479] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled 
for removal in v24.09 hit 1 times 00:20:40.017 [2024-07-15 23:23:55.257521] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:40.017 23:23:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 2391086 00:20:40.312 23:23:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:40.312 23:23:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:40.312 23:23:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:40.312 23:23:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:40.312 23:23:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:40.312 23:23:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.312 23:23:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:40.312 23:23:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.844 23:23:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:42.844 00:20:42.844 real 0m5.608s 00:20:42.844 user 0m2.123s 00:20:42.844 sys 0m1.850s 00:20:42.844 23:23:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:42.844 23:23:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:42.844 ************************************ 00:20:42.844 END TEST nvmf_async_init 00:20:42.844 ************************************ 00:20:42.844 23:23:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:42.844 23:23:57 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:42.844 23:23:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:42.844 23:23:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:42.844 23:23:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:42.844 ************************************ 00:20:42.844 START TEST dma 00:20:42.844 ************************************ 00:20:42.844 23:23:57 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:42.844 * Looking for test storage... 
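For reference, the TLS-PSK portion of the nvmf_async_init run that just completed above (host/async_init.sh) reduces to the short RPC sequence below. This is a condensed sketch, not the test script itself: rpc_cmd in the harness is assumed to forward to SPDK's scripts/rpc.py against the running target, and the key material, paths and NQNs are copied from the trace.

    key_path=$(mktemp)                                   # /tmp/tmp.BbgKiwfCfs in this run
    echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
    chmod 0600 "$key_path"
    # restrict the subsystem to explicitly allowed hosts, open a TLS listener on 4421,
    # then allow host1 to connect using the PSK
    scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"
    # attach a host-side bdev through the secured listener, inspect it, then tear down
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"
    scripts/rpc.py bdev_get_bdevs -b nvme0n1
    scripts/rpc.py bdev_nvme_detach_controller nvme0
    rm -f "$key_path"

As the deprecation hits logged above indicate, both the PSK path argument on nvmf_subsystem_add_host and spdk_nvme_ctrlr_opts.psk used by bdev_nvme_attach_controller are scheduled for removal in v24.09.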
00:20:42.844 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:42.844 23:23:57 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:42.844 23:23:57 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:20:42.844 23:23:57 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:42.844 23:23:57 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:42.844 23:23:57 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:42.844 23:23:57 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:42.844 23:23:57 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:42.844 23:23:57 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:42.844 23:23:57 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:42.844 23:23:57 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:42.844 23:23:57 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:42.844 23:23:57 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:42.844 23:23:57 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:42.844 23:23:57 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:42.844 23:23:57 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:42.844 23:23:57 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:42.844 23:23:57 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:42.844 23:23:57 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:42.844 23:23:57 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:42.844 23:23:57 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:42.844 23:23:57 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:42.844 23:23:57 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:42.844 23:23:57 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.844 23:23:57 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.844 23:23:57 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.844 23:23:57 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:20:42.844 23:23:57 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.844 23:23:57 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:20:42.844 23:23:57 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:42.844 23:23:57 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:42.844 23:23:57 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:42.844 23:23:57 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:42.844 23:23:57 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:42.844 23:23:57 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:42.844 23:23:57 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:42.844 23:23:57 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:42.844 23:23:57 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:42.844 23:23:57 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:20:42.844 00:20:42.844 real 0m0.071s 00:20:42.844 user 0m0.030s 00:20:42.844 sys 0m0.045s 00:20:42.845 23:23:57 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:42.845 23:23:57 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:20:42.845 ************************************ 00:20:42.845 END TEST dma 00:20:42.845 ************************************ 00:20:42.845 23:23:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:42.845 23:23:57 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:42.845 23:23:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:42.845 23:23:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:42.845 23:23:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:42.845 ************************************ 00:20:42.845 START TEST nvmf_identify 00:20:42.845 ************************************ 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:42.845 * Looking for test storage... 
00:20:42.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:20:42.845 23:23:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:44.746 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:44.746 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:20:44.746 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:44.746 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:44.746 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:44.746 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:44.746 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:44.746 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:20:44.746 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:44.746 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:20:44.746 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:20:44.746 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:20:44.746 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:44.747 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:44.747 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:44.747 Found net devices under 0000:84:00.0: cvl_0_0 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:44.747 Found net devices under 0000:84:00.1: cvl_0_1 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:44.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:44.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:20:44.747 00:20:44.747 --- 10.0.0.2 ping statistics --- 00:20:44.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.747 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:44.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:44.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:20:44.747 00:20:44.747 --- 10.0.0.1 ping statistics --- 00:20:44.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.747 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2393226 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2393226 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 2393226 ']' 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:44.747 23:23:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:44.747 [2024-07-15 23:23:59.860514] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:20:44.747 [2024-07-15 23:23:59.860586] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:44.747 EAL: No free 2048 kB hugepages reported on node 1 00:20:44.747 [2024-07-15 23:23:59.923584] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:44.747 [2024-07-15 23:24:00.039309] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
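The nvmf_tcp_init plumbing traced above amounts to the following: the first E810 port (cvl_0_0) is moved into a target network namespace and addressed as 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1. A minimal sketch of those steps, with the commands taken from the trace and the nvmf_tgt path abbreviated:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # allow inbound TCP 4420 on the initiator-side NIC
    ping -c 1 10.0.0.2                                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target -> initiator
    modprobe nvme-tcp
    # the target application is then started inside the namespace:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF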
00:20:44.747 [2024-07-15 23:24:00.039357] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:44.747 [2024-07-15 23:24:00.039370] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:44.747 [2024-07-15 23:24:00.039381] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:44.747 [2024-07-15 23:24:00.039392] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:44.747 [2024-07-15 23:24:00.039537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:44.747 [2024-07-15 23:24:00.042756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:44.747 [2024-07-15 23:24:00.042793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:44.747 [2024-07-15 23:24:00.042797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.683 23:24:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:45.683 23:24:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:20:45.683 23:24:00 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:45.683 23:24:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.683 23:24:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:45.683 [2024-07-15 23:24:00.814492] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:45.683 23:24:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.683 23:24:00 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:45.684 23:24:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:45.684 23:24:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:45.684 23:24:00 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:45.684 23:24:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.684 23:24:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:45.684 Malloc0 00:20:45.684 23:24:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.684 23:24:00 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:45.684 23:24:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.684 23:24:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:45.684 23:24:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.684 23:24:00 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:45.684 23:24:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.684 23:24:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:45.684 23:24:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.684 23:24:00 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:45.684 23:24:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:20:45.684 23:24:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:45.684 [2024-07-15 23:24:00.886676] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:45.684 23:24:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.684 23:24:00 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:45.684 23:24:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.684 23:24:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:45.684 23:24:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.684 23:24:00 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:45.684 23:24:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.684 23:24:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:45.684 [ 00:20:45.684 { 00:20:45.684 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:45.684 "subtype": "Discovery", 00:20:45.684 "listen_addresses": [ 00:20:45.684 { 00:20:45.684 "trtype": "TCP", 00:20:45.684 "adrfam": "IPv4", 00:20:45.684 "traddr": "10.0.0.2", 00:20:45.684 "trsvcid": "4420" 00:20:45.684 } 00:20:45.684 ], 00:20:45.684 "allow_any_host": true, 00:20:45.684 "hosts": [] 00:20:45.684 }, 00:20:45.684 { 00:20:45.684 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.684 "subtype": "NVMe", 00:20:45.684 "listen_addresses": [ 00:20:45.684 { 00:20:45.684 "trtype": "TCP", 00:20:45.684 "adrfam": "IPv4", 00:20:45.684 "traddr": "10.0.0.2", 00:20:45.684 "trsvcid": "4420" 00:20:45.684 } 00:20:45.684 ], 00:20:45.684 "allow_any_host": true, 00:20:45.684 "hosts": [], 00:20:45.684 "serial_number": "SPDK00000000000001", 00:20:45.684 "model_number": "SPDK bdev Controller", 00:20:45.684 "max_namespaces": 32, 00:20:45.684 "min_cntlid": 1, 00:20:45.684 "max_cntlid": 65519, 00:20:45.684 "namespaces": [ 00:20:45.684 { 00:20:45.684 "nsid": 1, 00:20:45.684 "bdev_name": "Malloc0", 00:20:45.684 "name": "Malloc0", 00:20:45.684 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:45.684 "eui64": "ABCDEF0123456789", 00:20:45.684 "uuid": "dffdb32b-0eae-4f2b-9b50-803b37bc7e45" 00:20:45.684 } 00:20:45.684 ] 00:20:45.684 } 00:20:45.684 ] 00:20:45.684 23:24:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.684 23:24:00 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:45.684 [2024-07-15 23:24:00.924927] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
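Before spdk_nvme_identify starts, the target above has been configured through a short RPC sequence. A condensed sketch of that setup and of the identify invocation follows (again assuming rpc_cmd forwards to scripts/rpc.py; all values are taken from the trace, binary paths abbreviated):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MB malloc bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # identify is then pointed at the discovery subsystem over TCP; -L all enables every
    # debug log flag, which is what produces the DEBUG trace that follows in the log
    ./build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all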
00:20:45.684 [2024-07-15 23:24:00.924971] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2393433 ] 00:20:45.684 EAL: No free 2048 kB hugepages reported on node 1 00:20:45.684 [2024-07-15 23:24:00.957403] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:45.684 [2024-07-15 23:24:00.957465] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:45.684 [2024-07-15 23:24:00.957475] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:45.684 [2024-07-15 23:24:00.957493] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:45.684 [2024-07-15 23:24:00.957504] sock.c: 357:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:45.684 [2024-07-15 23:24:00.961199] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:45.684 [2024-07-15 23:24:00.961258] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1c4e6e0 0 00:20:45.684 [2024-07-15 23:24:00.968756] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:45.684 [2024-07-15 23:24:00.968791] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:45.684 [2024-07-15 23:24:00.968800] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:45.684 [2024-07-15 23:24:00.968806] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:45.684 [2024-07-15 23:24:00.968854] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.684 [2024-07-15 23:24:00.968867] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.684 [2024-07-15 23:24:00.968875] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c4e6e0) 00:20:45.684 [2024-07-15 23:24:00.968894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:45.684 [2024-07-15 23:24:00.968927] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cae540, cid 0, qid 0 00:20:45.684 [2024-07-15 23:24:00.975751] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.684 [2024-07-15 23:24:00.975769] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.684 [2024-07-15 23:24:00.975777] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.684 [2024-07-15 23:24:00.975784] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cae540) on tqpair=0x1c4e6e0 00:20:45.684 [2024-07-15 23:24:00.975813] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:45.684 [2024-07-15 23:24:00.975825] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:45.684 [2024-07-15 23:24:00.975834] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:45.684 [2024-07-15 23:24:00.975857] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.684 [2024-07-15 23:24:00.975866] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.684 [2024-07-15 23:24:00.975873] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c4e6e0) 00:20:45.684 [2024-07-15 23:24:00.975885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.684 [2024-07-15 23:24:00.975909] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cae540, cid 0, qid 0 00:20:45.684 [2024-07-15 23:24:00.976056] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.684 [2024-07-15 23:24:00.976071] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.684 [2024-07-15 23:24:00.976077] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.685 [2024-07-15 23:24:00.976083] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cae540) on tqpair=0x1c4e6e0 00:20:45.685 [2024-07-15 23:24:00.976096] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:45.685 [2024-07-15 23:24:00.976110] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:45.685 [2024-07-15 23:24:00.976122] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.685 [2024-07-15 23:24:00.976129] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.685 [2024-07-15 23:24:00.976135] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c4e6e0) 00:20:45.685 [2024-07-15 23:24:00.976145] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.685 [2024-07-15 23:24:00.976166] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cae540, cid 0, qid 0 00:20:45.685 [2024-07-15 23:24:00.976268] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.685 [2024-07-15 23:24:00.976282] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.685 [2024-07-15 23:24:00.976288] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.685 [2024-07-15 23:24:00.976295] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cae540) on tqpair=0x1c4e6e0 00:20:45.685 [2024-07-15 23:24:00.976302] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:45.685 [2024-07-15 23:24:00.976316] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:45.685 [2024-07-15 23:24:00.976327] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.685 [2024-07-15 23:24:00.976334] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.685 [2024-07-15 23:24:00.976340] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c4e6e0) 00:20:45.685 [2024-07-15 23:24:00.976350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.685 [2024-07-15 23:24:00.976374] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cae540, cid 0, qid 0 00:20:45.685 [2024-07-15 23:24:00.976470] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.685 
[2024-07-15 23:24:00.976484] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.685 [2024-07-15 23:24:00.976490] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.685 [2024-07-15 23:24:00.976496] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cae540) on tqpair=0x1c4e6e0 00:20:45.685 [2024-07-15 23:24:00.976505] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:45.685 [2024-07-15 23:24:00.976521] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.685 [2024-07-15 23:24:00.976529] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.685 [2024-07-15 23:24:00.976535] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c4e6e0) 00:20:45.685 [2024-07-15 23:24:00.976545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.685 [2024-07-15 23:24:00.976564] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cae540, cid 0, qid 0 00:20:45.685 [2024-07-15 23:24:00.976663] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.685 [2024-07-15 23:24:00.976677] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.685 [2024-07-15 23:24:00.976683] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.685 [2024-07-15 23:24:00.976689] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cae540) on tqpair=0x1c4e6e0 00:20:45.685 [2024-07-15 23:24:00.976697] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:45.685 [2024-07-15 23:24:00.976705] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:45.685 [2024-07-15 23:24:00.976718] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:45.685 [2024-07-15 23:24:00.976852] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:45.685 [2024-07-15 23:24:00.976861] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:45.685 [2024-07-15 23:24:00.976877] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.685 [2024-07-15 23:24:00.976884] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.685 [2024-07-15 23:24:00.976891] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c4e6e0) 00:20:45.685 [2024-07-15 23:24:00.976901] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.685 [2024-07-15 23:24:00.976923] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cae540, cid 0, qid 0 00:20:45.685 [2024-07-15 23:24:00.977060] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.685 [2024-07-15 23:24:00.977074] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.685 [2024-07-15 23:24:00.977081] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:20:45.685 [2024-07-15 23:24:00.977087] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cae540) on tqpair=0x1c4e6e0 00:20:45.685 [2024-07-15 23:24:00.977110] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:45.685 [2024-07-15 23:24:00.977127] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.685 [2024-07-15 23:24:00.977135] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.685 [2024-07-15 23:24:00.977141] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c4e6e0) 00:20:45.685 [2024-07-15 23:24:00.977155] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.685 [2024-07-15 23:24:00.977176] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cae540, cid 0, qid 0 00:20:45.685 [2024-07-15 23:24:00.977293] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.685 [2024-07-15 23:24:00.977305] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.685 [2024-07-15 23:24:00.977311] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.685 [2024-07-15 23:24:00.977317] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cae540) on tqpair=0x1c4e6e0 00:20:45.685 [2024-07-15 23:24:00.977325] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:45.685 [2024-07-15 23:24:00.977333] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:45.685 [2024-07-15 23:24:00.977346] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:45.685 [2024-07-15 23:24:00.977359] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:45.685 [2024-07-15 23:24:00.977376] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.685 [2024-07-15 23:24:00.977383] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c4e6e0) 00:20:45.685 [2024-07-15 23:24:00.977393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.685 [2024-07-15 23:24:00.977413] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cae540, cid 0, qid 0 00:20:45.685 [2024-07-15 23:24:00.977560] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:45.685 [2024-07-15 23:24:00.977574] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:45.685 [2024-07-15 23:24:00.977580] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:45.685 [2024-07-15 23:24:00.977587] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c4e6e0): datao=0, datal=4096, cccid=0 00:20:45.685 [2024-07-15 23:24:00.977595] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cae540) on tqpair(0x1c4e6e0): expected_datao=0, payload_size=4096 00:20:45.685 [2024-07-15 23:24:00.977601] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:20:45.685 [2024-07-15 23:24:00.977612] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:45.685 [2024-07-15 23:24:00.977620] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:45.685 [2024-07-15 23:24:00.977632] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.685 [2024-07-15 23:24:00.977641] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.685 [2024-07-15 23:24:00.977646] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.685 [2024-07-15 23:24:00.977653] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cae540) on tqpair=0x1c4e6e0 00:20:45.685 [2024-07-15 23:24:00.977664] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:45.685 [2024-07-15 23:24:00.977673] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:45.685 [2024-07-15 23:24:00.977680] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:45.685 [2024-07-15 23:24:00.977688] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:45.685 [2024-07-15 23:24:00.977696] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:45.686 [2024-07-15 23:24:00.977703] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:45.686 [2024-07-15 23:24:00.977748] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:45.686 [2024-07-15 23:24:00.977770] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.686 [2024-07-15 23:24:00.977779] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.686 [2024-07-15 23:24:00.977785] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c4e6e0) 00:20:45.686 [2024-07-15 23:24:00.977796] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:45.686 [2024-07-15 23:24:00.977819] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cae540, cid 0, qid 0 00:20:45.686 [2024-07-15 23:24:00.977931] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.686 [2024-07-15 23:24:00.977942] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.686 [2024-07-15 23:24:00.977949] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.686 [2024-07-15 23:24:00.977956] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cae540) on tqpair=0x1c4e6e0 00:20:45.686 [2024-07-15 23:24:00.977968] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.686 [2024-07-15 23:24:00.977975] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.686 [2024-07-15 23:24:00.977982] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c4e6e0) 00:20:45.686 [2024-07-15 23:24:00.977992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:45.686 [2024-07-15 23:24:00.978002] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.686 [2024-07-15 23:24:00.978009] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.686 [2024-07-15 23:24:00.978015] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1c4e6e0) 00:20:45.686 [2024-07-15 23:24:00.978038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:45.686 [2024-07-15 23:24:00.978048] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.686 [2024-07-15 23:24:00.978054] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.686 [2024-07-15 23:24:00.978060] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1c4e6e0) 00:20:45.686 [2024-07-15 23:24:00.978068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:45.686 [2024-07-15 23:24:00.978077] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.686 [2024-07-15 23:24:00.978083] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.686 [2024-07-15 23:24:00.978089] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c4e6e0) 00:20:45.686 [2024-07-15 23:24:00.978097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:45.686 [2024-07-15 23:24:00.978105] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:45.686 [2024-07-15 23:24:00.978124] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:45.686 [2024-07-15 23:24:00.978136] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.686 [2024-07-15 23:24:00.978143] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c4e6e0) 00:20:45.686 [2024-07-15 23:24:00.978152] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.686 [2024-07-15 23:24:00.978184] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cae540, cid 0, qid 0 00:20:45.686 [2024-07-15 23:24:00.978194] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cae6c0, cid 1, qid 0 00:20:45.686 [2024-07-15 23:24:00.978205] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cae840, cid 2, qid 0 00:20:45.686 [2024-07-15 23:24:00.978213] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cae9c0, cid 3, qid 0 00:20:45.686 [2024-07-15 23:24:00.978220] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1caeb40, cid 4, qid 0 00:20:45.686 [2024-07-15 23:24:00.978365] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.686 [2024-07-15 23:24:00.978376] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.686 [2024-07-15 23:24:00.978382] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.686 [2024-07-15 23:24:00.978388] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1caeb40) on tqpair=0x1c4e6e0 00:20:45.686 [2024-07-15 23:24:00.978397] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:45.686 [2024-07-15 23:24:00.978406] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:45.686 [2024-07-15 23:24:00.978439] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.686 [2024-07-15 23:24:00.978448] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c4e6e0) 00:20:45.686 [2024-07-15 23:24:00.978458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.686 [2024-07-15 23:24:00.978479] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1caeb40, cid 4, qid 0 00:20:45.686 [2024-07-15 23:24:00.978617] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:45.686 [2024-07-15 23:24:00.978628] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:45.686 [2024-07-15 23:24:00.978634] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:45.686 [2024-07-15 23:24:00.978641] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c4e6e0): datao=0, datal=4096, cccid=4 00:20:45.686 [2024-07-15 23:24:00.978648] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1caeb40) on tqpair(0x1c4e6e0): expected_datao=0, payload_size=4096 00:20:45.686 [2024-07-15 23:24:00.978655] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.686 [2024-07-15 23:24:00.978664] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:45.686 [2024-07-15 23:24:00.978680] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:45.686 [2024-07-15 23:24:00.978705] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.686 [2024-07-15 23:24:00.978730] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.686 [2024-07-15 23:24:00.978746] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.686 [2024-07-15 23:24:00.978754] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1caeb40) on tqpair=0x1c4e6e0 00:20:45.686 [2024-07-15 23:24:00.978787] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:45.686 [2024-07-15 23:24:00.978825] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.686 [2024-07-15 23:24:00.978836] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c4e6e0) 00:20:45.686 [2024-07-15 23:24:00.978847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.686 [2024-07-15 23:24:00.978859] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.686 [2024-07-15 23:24:00.978866] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.686 [2024-07-15 23:24:00.978872] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c4e6e0) 00:20:45.686 [2024-07-15 23:24:00.978881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:45.686 [2024-07-15 23:24:00.978908] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1caeb40, cid 4, qid 0 00:20:45.686 [2024-07-15 23:24:00.978920] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1caecc0, cid 5, qid 0 00:20:45.686 [2024-07-15 23:24:00.979080] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:45.686 [2024-07-15 23:24:00.979095] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:45.686 [2024-07-15 23:24:00.979101] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:45.686 [2024-07-15 23:24:00.979107] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c4e6e0): datao=0, datal=1024, cccid=4 00:20:45.686 [2024-07-15 23:24:00.979114] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1caeb40) on tqpair(0x1c4e6e0): expected_datao=0, payload_size=1024 00:20:45.686 [2024-07-15 23:24:00.979121] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.686 [2024-07-15 23:24:00.979130] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:45.686 [2024-07-15 23:24:00.979137] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:45.686 [2024-07-15 23:24:00.979145] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.686 [2024-07-15 23:24:00.979153] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.686 [2024-07-15 23:24:00.979159] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.686 [2024-07-15 23:24:00.979166] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1caecc0) on tqpair=0x1c4e6e0 00:20:45.952 [2024-07-15 23:24:01.023750] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.952 [2024-07-15 23:24:01.023770] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.952 [2024-07-15 23:24:01.023777] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.952 [2024-07-15 23:24:01.023784] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1caeb40) on tqpair=0x1c4e6e0 00:20:45.952 [2024-07-15 23:24:01.023809] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.952 [2024-07-15 23:24:01.023820] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c4e6e0) 00:20:45.952 [2024-07-15 23:24:01.023831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.952 [2024-07-15 23:24:01.023862] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1caeb40, cid 4, qid 0 00:20:45.952 [2024-07-15 23:24:01.023997] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:45.952 [2024-07-15 23:24:01.024010] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:45.952 [2024-07-15 23:24:01.024017] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:45.952 [2024-07-15 23:24:01.024023] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c4e6e0): datao=0, datal=3072, cccid=4 00:20:45.952 [2024-07-15 23:24:01.024030] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1caeb40) on tqpair(0x1c4e6e0): expected_datao=0, payload_size=3072 00:20:45.952 [2024-07-15 23:24:01.024038] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.952 [2024-07-15 23:24:01.024055] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:45.952 [2024-07-15 23:24:01.024077] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:45.952 [2024-07-15 23:24:01.024089] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.952 [2024-07-15 23:24:01.024099] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.952 [2024-07-15 23:24:01.024105] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.952 [2024-07-15 23:24:01.024112] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1caeb40) on tqpair=0x1c4e6e0 00:20:45.952 [2024-07-15 23:24:01.024127] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.952 [2024-07-15 23:24:01.024135] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c4e6e0) 00:20:45.952 [2024-07-15 23:24:01.024145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.952 [2024-07-15 23:24:01.024174] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1caeb40, cid 4, qid 0 00:20:45.952 [2024-07-15 23:24:01.024295] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:45.952 [2024-07-15 23:24:01.024309] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:45.952 [2024-07-15 23:24:01.024316] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:45.952 [2024-07-15 23:24:01.024322] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c4e6e0): datao=0, datal=8, cccid=4 00:20:45.952 [2024-07-15 23:24:01.024329] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1caeb40) on tqpair(0x1c4e6e0): expected_datao=0, payload_size=8 00:20:45.952 [2024-07-15 23:24:01.024336] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.952 [2024-07-15 23:24:01.024345] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:45.952 [2024-07-15 23:24:01.024352] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:45.952 [2024-07-15 23:24:01.064877] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.952 [2024-07-15 23:24:01.064897] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.952 [2024-07-15 23:24:01.064904] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.952 [2024-07-15 23:24:01.064911] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1caeb40) on tqpair=0x1c4e6e0 00:20:45.952 ===================================================== 00:20:45.952 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:45.952 ===================================================== 00:20:45.952 Controller Capabilities/Features 00:20:45.952 ================================ 00:20:45.952 Vendor ID: 0000 00:20:45.952 Subsystem Vendor ID: 0000 00:20:45.952 Serial Number: .................... 00:20:45.952 Model Number: ........................................ 
00:20:45.952 Firmware Version: 24.09 00:20:45.952 Recommended Arb Burst: 0 00:20:45.952 IEEE OUI Identifier: 00 00 00 00:20:45.952 Multi-path I/O 00:20:45.952 May have multiple subsystem ports: No 00:20:45.952 May have multiple controllers: No 00:20:45.952 Associated with SR-IOV VF: No 00:20:45.952 Max Data Transfer Size: 131072 00:20:45.952 Max Number of Namespaces: 0 00:20:45.952 Max Number of I/O Queues: 1024 00:20:45.952 NVMe Specification Version (VS): 1.3 00:20:45.952 NVMe Specification Version (Identify): 1.3 00:20:45.952 Maximum Queue Entries: 128 00:20:45.952 Contiguous Queues Required: Yes 00:20:45.952 Arbitration Mechanisms Supported 00:20:45.952 Weighted Round Robin: Not Supported 00:20:45.952 Vendor Specific: Not Supported 00:20:45.952 Reset Timeout: 15000 ms 00:20:45.952 Doorbell Stride: 4 bytes 00:20:45.952 NVM Subsystem Reset: Not Supported 00:20:45.952 Command Sets Supported 00:20:45.952 NVM Command Set: Supported 00:20:45.952 Boot Partition: Not Supported 00:20:45.952 Memory Page Size Minimum: 4096 bytes 00:20:45.952 Memory Page Size Maximum: 4096 bytes 00:20:45.952 Persistent Memory Region: Not Supported 00:20:45.952 Optional Asynchronous Events Supported 00:20:45.952 Namespace Attribute Notices: Not Supported 00:20:45.952 Firmware Activation Notices: Not Supported 00:20:45.952 ANA Change Notices: Not Supported 00:20:45.952 PLE Aggregate Log Change Notices: Not Supported 00:20:45.952 LBA Status Info Alert Notices: Not Supported 00:20:45.952 EGE Aggregate Log Change Notices: Not Supported 00:20:45.952 Normal NVM Subsystem Shutdown event: Not Supported 00:20:45.952 Zone Descriptor Change Notices: Not Supported 00:20:45.952 Discovery Log Change Notices: Supported 00:20:45.952 Controller Attributes 00:20:45.952 128-bit Host Identifier: Not Supported 00:20:45.952 Non-Operational Permissive Mode: Not Supported 00:20:45.952 NVM Sets: Not Supported 00:20:45.952 Read Recovery Levels: Not Supported 00:20:45.952 Endurance Groups: Not Supported 00:20:45.952 Predictable Latency Mode: Not Supported 00:20:45.952 Traffic Based Keep ALive: Not Supported 00:20:45.952 Namespace Granularity: Not Supported 00:20:45.952 SQ Associations: Not Supported 00:20:45.952 UUID List: Not Supported 00:20:45.952 Multi-Domain Subsystem: Not Supported 00:20:45.952 Fixed Capacity Management: Not Supported 00:20:45.952 Variable Capacity Management: Not Supported 00:20:45.952 Delete Endurance Group: Not Supported 00:20:45.952 Delete NVM Set: Not Supported 00:20:45.952 Extended LBA Formats Supported: Not Supported 00:20:45.952 Flexible Data Placement Supported: Not Supported 00:20:45.952 00:20:45.953 Controller Memory Buffer Support 00:20:45.953 ================================ 00:20:45.953 Supported: No 00:20:45.953 00:20:45.953 Persistent Memory Region Support 00:20:45.953 ================================ 00:20:45.953 Supported: No 00:20:45.953 00:20:45.953 Admin Command Set Attributes 00:20:45.953 ============================ 00:20:45.953 Security Send/Receive: Not Supported 00:20:45.953 Format NVM: Not Supported 00:20:45.953 Firmware Activate/Download: Not Supported 00:20:45.953 Namespace Management: Not Supported 00:20:45.953 Device Self-Test: Not Supported 00:20:45.953 Directives: Not Supported 00:20:45.953 NVMe-MI: Not Supported 00:20:45.953 Virtualization Management: Not Supported 00:20:45.953 Doorbell Buffer Config: Not Supported 00:20:45.953 Get LBA Status Capability: Not Supported 00:20:45.953 Command & Feature Lockdown Capability: Not Supported 00:20:45.953 Abort Command Limit: 1 00:20:45.953 Async 
Event Request Limit: 4 00:20:45.953 Number of Firmware Slots: N/A 00:20:45.953 Firmware Slot 1 Read-Only: N/A 00:20:45.953 Firmware Activation Without Reset: N/A 00:20:45.953 Multiple Update Detection Support: N/A 00:20:45.953 Firmware Update Granularity: No Information Provided 00:20:45.953 Per-Namespace SMART Log: No 00:20:45.953 Asymmetric Namespace Access Log Page: Not Supported 00:20:45.953 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:45.953 Command Effects Log Page: Not Supported 00:20:45.953 Get Log Page Extended Data: Supported 00:20:45.953 Telemetry Log Pages: Not Supported 00:20:45.953 Persistent Event Log Pages: Not Supported 00:20:45.953 Supported Log Pages Log Page: May Support 00:20:45.953 Commands Supported & Effects Log Page: Not Supported 00:20:45.953 Feature Identifiers & Effects Log Page:May Support 00:20:45.953 NVMe-MI Commands & Effects Log Page: May Support 00:20:45.953 Data Area 4 for Telemetry Log: Not Supported 00:20:45.953 Error Log Page Entries Supported: 128 00:20:45.953 Keep Alive: Not Supported 00:20:45.953 00:20:45.953 NVM Command Set Attributes 00:20:45.953 ========================== 00:20:45.953 Submission Queue Entry Size 00:20:45.953 Max: 1 00:20:45.953 Min: 1 00:20:45.953 Completion Queue Entry Size 00:20:45.953 Max: 1 00:20:45.953 Min: 1 00:20:45.953 Number of Namespaces: 0 00:20:45.953 Compare Command: Not Supported 00:20:45.953 Write Uncorrectable Command: Not Supported 00:20:45.953 Dataset Management Command: Not Supported 00:20:45.953 Write Zeroes Command: Not Supported 00:20:45.953 Set Features Save Field: Not Supported 00:20:45.953 Reservations: Not Supported 00:20:45.953 Timestamp: Not Supported 00:20:45.953 Copy: Not Supported 00:20:45.953 Volatile Write Cache: Not Present 00:20:45.953 Atomic Write Unit (Normal): 1 00:20:45.953 Atomic Write Unit (PFail): 1 00:20:45.953 Atomic Compare & Write Unit: 1 00:20:45.953 Fused Compare & Write: Supported 00:20:45.953 Scatter-Gather List 00:20:45.953 SGL Command Set: Supported 00:20:45.953 SGL Keyed: Supported 00:20:45.953 SGL Bit Bucket Descriptor: Not Supported 00:20:45.953 SGL Metadata Pointer: Not Supported 00:20:45.953 Oversized SGL: Not Supported 00:20:45.953 SGL Metadata Address: Not Supported 00:20:45.953 SGL Offset: Supported 00:20:45.953 Transport SGL Data Block: Not Supported 00:20:45.953 Replay Protected Memory Block: Not Supported 00:20:45.953 00:20:45.953 Firmware Slot Information 00:20:45.953 ========================= 00:20:45.953 Active slot: 0 00:20:45.953 00:20:45.953 00:20:45.953 Error Log 00:20:45.953 ========= 00:20:45.953 00:20:45.953 Active Namespaces 00:20:45.953 ================= 00:20:45.953 Discovery Log Page 00:20:45.953 ================== 00:20:45.953 Generation Counter: 2 00:20:45.953 Number of Records: 2 00:20:45.953 Record Format: 0 00:20:45.953 00:20:45.953 Discovery Log Entry 0 00:20:45.953 ---------------------- 00:20:45.953 Transport Type: 3 (TCP) 00:20:45.953 Address Family: 1 (IPv4) 00:20:45.953 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:45.953 Entry Flags: 00:20:45.953 Duplicate Returned Information: 1 00:20:45.953 Explicit Persistent Connection Support for Discovery: 1 00:20:45.953 Transport Requirements: 00:20:45.953 Secure Channel: Not Required 00:20:45.953 Port ID: 0 (0x0000) 00:20:45.953 Controller ID: 65535 (0xffff) 00:20:45.953 Admin Max SQ Size: 128 00:20:45.953 Transport Service Identifier: 4420 00:20:45.953 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:45.953 Transport Address: 10.0.0.2 00:20:45.953 
Discovery Log Entry 1 00:20:45.953 ---------------------- 00:20:45.953 Transport Type: 3 (TCP) 00:20:45.953 Address Family: 1 (IPv4) 00:20:45.953 Subsystem Type: 2 (NVM Subsystem) 00:20:45.953 Entry Flags: 00:20:45.953 Duplicate Returned Information: 0 00:20:45.953 Explicit Persistent Connection Support for Discovery: 0 00:20:45.953 Transport Requirements: 00:20:45.953 Secure Channel: Not Required 00:20:45.953 Port ID: 0 (0x0000) 00:20:45.953 Controller ID: 65535 (0xffff) 00:20:45.953 Admin Max SQ Size: 128 00:20:45.953 Transport Service Identifier: 4420 00:20:45.953 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:45.953 Transport Address: 10.0.0.2 [2024-07-15 23:24:01.065025] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:20:45.953 [2024-07-15 23:24:01.065068] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cae540) on tqpair=0x1c4e6e0 00:20:45.953 [2024-07-15 23:24:01.065080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.953 [2024-07-15 23:24:01.065089] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cae6c0) on tqpair=0x1c4e6e0 00:20:45.953 [2024-07-15 23:24:01.065096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.953 [2024-07-15 23:24:01.065104] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cae840) on tqpair=0x1c4e6e0 00:20:45.953 [2024-07-15 23:24:01.065111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.953 [2024-07-15 23:24:01.065119] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cae9c0) on tqpair=0x1c4e6e0 00:20:45.953 [2024-07-15 23:24:01.065126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.953 [2024-07-15 23:24:01.065143] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.953 [2024-07-15 23:24:01.065152] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.953 [2024-07-15 23:24:01.065158] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c4e6e0) 00:20:45.953 [2024-07-15 23:24:01.065169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.953 [2024-07-15 23:24:01.065194] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cae9c0, cid 3, qid 0 00:20:45.953 [2024-07-15 23:24:01.065301] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.953 [2024-07-15 23:24:01.065316] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.953 [2024-07-15 23:24:01.065322] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.953 [2024-07-15 23:24:01.065329] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cae9c0) on tqpair=0x1c4e6e0 00:20:45.953 [2024-07-15 23:24:01.065340] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.953 [2024-07-15 23:24:01.065347] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.953 [2024-07-15 23:24:01.065354] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c4e6e0) 00:20:45.953 [2024-07-15 
23:24:01.065364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.953 [2024-07-15 23:24:01.065390] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cae9c0, cid 3, qid 0 00:20:45.953 [2024-07-15 23:24:01.065519] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.953 [2024-07-15 23:24:01.065533] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.953 [2024-07-15 23:24:01.065540] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.953 [2024-07-15 23:24:01.065546] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cae9c0) on tqpair=0x1c4e6e0 00:20:45.953 [2024-07-15 23:24:01.065554] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:45.953 [2024-07-15 23:24:01.065562] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:45.953 [2024-07-15 23:24:01.065578] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.953 [2024-07-15 23:24:01.065586] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.953 [2024-07-15 23:24:01.065593] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c4e6e0) 00:20:45.953 [2024-07-15 23:24:01.065603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.953 [2024-07-15 23:24:01.065623] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cae9c0, cid 3, qid 0 00:20:45.953 [2024-07-15 23:24:01.065751] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.953 [2024-07-15 23:24:01.065767] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.953 [2024-07-15 23:24:01.065773] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.953 [2024-07-15 23:24:01.065780] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cae9c0) on tqpair=0x1c4e6e0 00:20:45.953 [2024-07-15 23:24:01.065798] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.953 [2024-07-15 23:24:01.065807] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.953 [2024-07-15 23:24:01.065813] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c4e6e0) 00:20:45.953 [2024-07-15 23:24:01.065823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.953 [2024-07-15 23:24:01.065844] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cae9c0, cid 3, qid 0 00:20:45.953 [2024-07-15 23:24:01.065943] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.953 [2024-07-15 23:24:01.065955] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.954 [2024-07-15 23:24:01.065961] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.954 [2024-07-15 23:24:01.065968] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cae9c0) on tqpair=0x1c4e6e0 00:20:45.954 [2024-07-15 23:24:01.065984] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.954 [2024-07-15 23:24:01.065993] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.954 [2024-07-15 23:24:01.065999] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c4e6e0) 00:20:45.954 [2024-07-15 23:24:01.066009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.954 [2024-07-15 23:24:01.066052] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cae9c0, cid 3, qid 0 00:20:45.954 [2024-07-15 23:24:01.066151] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.954 [2024-07-15 23:24:01.066165] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.954 [2024-07-15 23:24:01.066171] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.954 [2024-07-15 23:24:01.066178] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cae9c0) on tqpair=0x1c4e6e0 00:20:45.954 [2024-07-15 23:24:01.066194] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.954 [2024-07-15 23:24:01.066203] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.954 [2024-07-15 23:24:01.066209] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c4e6e0) 00:20:45.954 [2024-07-15 23:24:01.066219] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.954 [2024-07-15 23:24:01.066242] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cae9c0, cid 3, qid 0 00:20:45.954 [2024-07-15 23:24:01.066340] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.954 [2024-07-15 23:24:01.066354] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.954 [2024-07-15 23:24:01.066361] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.954 [2024-07-15 23:24:01.066367] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cae9c0) on tqpair=0x1c4e6e0 00:20:45.954 [2024-07-15 23:24:01.066383] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.954 [2024-07-15 23:24:01.066392] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.954 [2024-07-15 23:24:01.066398] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c4e6e0) 00:20:45.954 [2024-07-15 23:24:01.066408] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.954 [2024-07-15 23:24:01.066428] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cae9c0, cid 3, qid 0 00:20:45.954 [2024-07-15 23:24:01.066528] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.954 [2024-07-15 23:24:01.066542] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.954 [2024-07-15 23:24:01.066549] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.954 [2024-07-15 23:24:01.066555] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cae9c0) on tqpair=0x1c4e6e0 00:20:45.954 [2024-07-15 23:24:01.066571] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.954 [2024-07-15 23:24:01.066580] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.954 [2024-07-15 23:24:01.066586] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c4e6e0) 00:20:45.954 [2024-07-15 23:24:01.066596] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.954 [2024-07-15 23:24:01.066615] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cae9c0, cid 3, qid 0 00:20:45.954 [2024-07-15 23:24:01.066730] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.954 [2024-07-15 23:24:01.066751] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.954 [2024-07-15 23:24:01.066759] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.954 [2024-07-15 23:24:01.066766] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cae9c0) on tqpair=0x1c4e6e0 00:20:45.954 [2024-07-15 23:24:01.066787] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.954 [2024-07-15 23:24:01.066797] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.954 [2024-07-15 23:24:01.066803] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c4e6e0) 00:20:45.954 [2024-07-15 23:24:01.066813] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.954 [2024-07-15 23:24:01.066834] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cae9c0, cid 3, qid 0 00:20:45.954 [2024-07-15 23:24:01.066932] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.954 [2024-07-15 23:24:01.066944] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.954 [2024-07-15 23:24:01.066951] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.954 [2024-07-15 23:24:01.066957] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cae9c0) on tqpair=0x1c4e6e0 00:20:45.954 [2024-07-15 23:24:01.066973] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.954 [2024-07-15 23:24:01.066982] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.954 [2024-07-15 23:24:01.066988] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c4e6e0) 00:20:45.954 [2024-07-15 23:24:01.066998] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.954 [2024-07-15 23:24:01.067048] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cae9c0, cid 3, qid 0 00:20:45.954 [2024-07-15 23:24:01.067151] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.954 [2024-07-15 23:24:01.067165] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.954 [2024-07-15 23:24:01.067171] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.954 [2024-07-15 23:24:01.067178] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cae9c0) on tqpair=0x1c4e6e0 00:20:45.954 [2024-07-15 23:24:01.067194] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.954 [2024-07-15 23:24:01.067202] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.954 [2024-07-15 23:24:01.067209] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c4e6e0) 00:20:45.954 [2024-07-15 23:24:01.067218] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.954 [2024-07-15 23:24:01.067238] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cae9c0, cid 3, qid 0 00:20:45.954 
[2024-07-15 23:24:01.067332] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.954 [2024-07-15 23:24:01.067346] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.954 [2024-07-15 23:24:01.067353] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.954 [2024-07-15 23:24:01.067359] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cae9c0) on tqpair=0x1c4e6e0 00:20:45.954 [2024-07-15 23:24:01.067375] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.954 [2024-07-15 23:24:01.067383] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.954 [2024-07-15 23:24:01.067389] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c4e6e0) 00:20:45.954 [2024-07-15 23:24:01.067399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.954 [2024-07-15 23:24:01.067418] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cae9c0, cid 3, qid 0 00:20:45.954 [2024-07-15 23:24:01.067518] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.954 [2024-07-15 23:24:01.067530] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.954 [2024-07-15 23:24:01.067536] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.954 [2024-07-15 23:24:01.067543] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cae9c0) on tqpair=0x1c4e6e0 00:20:45.954 [2024-07-15 23:24:01.067558] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.954 [2024-07-15 23:24:01.067567] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.954 [2024-07-15 23:24:01.067573] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c4e6e0) 00:20:45.954 [2024-07-15 23:24:01.067583] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.954 [2024-07-15 23:24:01.067603] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cae9c0, cid 3, qid 0 00:20:45.954 [2024-07-15 23:24:01.067700] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.954 [2024-07-15 23:24:01.067711] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.954 [2024-07-15 23:24:01.067732] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.954 [2024-07-15 23:24:01.067747] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cae9c0) on tqpair=0x1c4e6e0 00:20:45.954 [2024-07-15 23:24:01.067765] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.954 [2024-07-15 23:24:01.067784] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.954 [2024-07-15 23:24:01.067791] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c4e6e0) 00:20:45.954 [2024-07-15 23:24:01.067801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.954 [2024-07-15 23:24:01.067823] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cae9c0, cid 3, qid 0 00:20:45.954 [2024-07-15 23:24:01.067920] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.954 [2024-07-15 23:24:01.067933] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:20:45.954 [2024-07-15 23:24:01.067940] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.954 [2024-07-15 23:24:01.067946] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cae9c0) on tqpair=0x1c4e6e0 00:20:45.954 [2024-07-15 23:24:01.067963] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.954 [2024-07-15 23:24:01.067972] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.954 [2024-07-15 23:24:01.067979] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c4e6e0) 00:20:45.954 [2024-07-15 23:24:01.067989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.954 [2024-07-15 23:24:01.068010] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cae9c0, cid 3, qid 0 00:20:45.954 [2024-07-15 23:24:01.068152] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.954 [2024-07-15 23:24:01.068164] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.954 [2024-07-15 23:24:01.068170] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.954 [2024-07-15 23:24:01.068177] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cae9c0) on tqpair=0x1c4e6e0 00:20:45.954 [2024-07-15 23:24:01.068193] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.954 [2024-07-15 23:24:01.068201] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.954 [2024-07-15 23:24:01.068208] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c4e6e0) 00:20:45.954 [2024-07-15 23:24:01.068217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.954 [2024-07-15 23:24:01.068237] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cae9c0, cid 3, qid 0 00:20:45.954 [2024-07-15 23:24:01.068336] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.954 [2024-07-15 23:24:01.068350] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.954 [2024-07-15 23:24:01.068357] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.954 [2024-07-15 23:24:01.068363] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cae9c0) on tqpair=0x1c4e6e0 00:20:45.954 [2024-07-15 23:24:01.068379] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.954 [2024-07-15 23:24:01.068387] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.954 [2024-07-15 23:24:01.068393] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c4e6e0) 00:20:45.955 [2024-07-15 23:24:01.068403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.955 [2024-07-15 23:24:01.068423] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cae9c0, cid 3, qid 0 00:20:45.955 [2024-07-15 23:24:01.068519] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.955 [2024-07-15 23:24:01.068531] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.955 [2024-07-15 23:24:01.068537] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.955 [2024-07-15 23:24:01.068543] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1cae9c0) on tqpair=0x1c4e6e0 00:20:45.955 [2024-07-15 23:24:01.068559] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.955 [2024-07-15 23:24:01.068567] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.955 [2024-07-15 23:24:01.068574] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c4e6e0) 00:20:45.955 [2024-07-15 23:24:01.068584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.955 [2024-07-15 23:24:01.068603] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cae9c0, cid 3, qid 0 00:20:45.955 [2024-07-15 23:24:01.068696] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.955 [2024-07-15 23:24:01.068713] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.955 [2024-07-15 23:24:01.068720] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.955 [2024-07-15 23:24:01.068726] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cae9c0) on tqpair=0x1c4e6e0 00:20:45.955 [2024-07-15 23:24:01.072765] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.955 [2024-07-15 23:24:01.072780] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.955 [2024-07-15 23:24:01.072786] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c4e6e0) 00:20:45.955 [2024-07-15 23:24:01.072797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.955 [2024-07-15 23:24:01.072819] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cae9c0, cid 3, qid 0 00:20:45.955 [2024-07-15 23:24:01.072929] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.955 [2024-07-15 23:24:01.072941] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.955 [2024-07-15 23:24:01.072948] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.955 [2024-07-15 23:24:01.072954] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cae9c0) on tqpair=0x1c4e6e0 00:20:45.955 [2024-07-15 23:24:01.072967] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:20:45.955 00:20:45.955 23:24:01 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:45.955 [2024-07-15 23:24:01.108046] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
00:20:45.955 [2024-07-15 23:24:01.108100] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2393450 ] 00:20:45.955 EAL: No free 2048 kB hugepages reported on node 1 00:20:45.955 [2024-07-15 23:24:01.142586] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:45.955 [2024-07-15 23:24:01.142638] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:45.955 [2024-07-15 23:24:01.142648] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:45.955 [2024-07-15 23:24:01.142662] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:45.955 [2024-07-15 23:24:01.142670] sock.c: 357:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:45.955 [2024-07-15 23:24:01.143193] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:45.955 [2024-07-15 23:24:01.143229] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x16e96e0 0 00:20:45.955 [2024-07-15 23:24:01.149754] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:45.955 [2024-07-15 23:24:01.149779] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:45.955 [2024-07-15 23:24:01.149786] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:45.955 [2024-07-15 23:24:01.149793] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:45.955 [2024-07-15 23:24:01.149824] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.955 [2024-07-15 23:24:01.149836] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.955 [2024-07-15 23:24:01.149843] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16e96e0) 00:20:45.955 [2024-07-15 23:24:01.149857] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:45.955 [2024-07-15 23:24:01.149888] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1749540, cid 0, qid 0 00:20:45.955 [2024-07-15 23:24:01.157754] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.955 [2024-07-15 23:24:01.157771] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.955 [2024-07-15 23:24:01.157778] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.955 [2024-07-15 23:24:01.157785] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1749540) on tqpair=0x16e96e0 00:20:45.955 [2024-07-15 23:24:01.157799] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:45.955 [2024-07-15 23:24:01.157809] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:45.955 [2024-07-15 23:24:01.157819] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:45.955 [2024-07-15 23:24:01.157836] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.955 [2024-07-15 23:24:01.157845] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:20:45.955 [2024-07-15 23:24:01.157851] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16e96e0) 00:20:45.955 [2024-07-15 23:24:01.157862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.955 [2024-07-15 23:24:01.157887] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1749540, cid 0, qid 0 00:20:45.955 [2024-07-15 23:24:01.158041] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.955 [2024-07-15 23:24:01.158053] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.955 [2024-07-15 23:24:01.158060] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.955 [2024-07-15 23:24:01.158066] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1749540) on tqpair=0x16e96e0 00:20:45.955 [2024-07-15 23:24:01.158077] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:45.955 [2024-07-15 23:24:01.158091] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:45.955 [2024-07-15 23:24:01.158103] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.955 [2024-07-15 23:24:01.158110] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.955 [2024-07-15 23:24:01.158116] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16e96e0) 00:20:45.955 [2024-07-15 23:24:01.158126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.955 [2024-07-15 23:24:01.158147] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1749540, cid 0, qid 0 00:20:45.955 [2024-07-15 23:24:01.158243] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.955 [2024-07-15 23:24:01.158254] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.955 [2024-07-15 23:24:01.158260] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.955 [2024-07-15 23:24:01.158266] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1749540) on tqpair=0x16e96e0 00:20:45.955 [2024-07-15 23:24:01.158274] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:45.955 [2024-07-15 23:24:01.158287] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:45.955 [2024-07-15 23:24:01.158298] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.955 [2024-07-15 23:24:01.158304] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.955 [2024-07-15 23:24:01.158310] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16e96e0) 00:20:45.955 [2024-07-15 23:24:01.158320] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.955 [2024-07-15 23:24:01.158344] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1749540, cid 0, qid 0 00:20:45.955 [2024-07-15 23:24:01.158443] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.955 [2024-07-15 23:24:01.158457] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:20:45.955 [2024-07-15 23:24:01.158463] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.955 [2024-07-15 23:24:01.158469] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1749540) on tqpair=0x16e96e0 00:20:45.955 [2024-07-15 23:24:01.158477] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:45.955 [2024-07-15 23:24:01.158493] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.955 [2024-07-15 23:24:01.158502] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.955 [2024-07-15 23:24:01.158508] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16e96e0) 00:20:45.955 [2024-07-15 23:24:01.158518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.955 [2024-07-15 23:24:01.158538] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1749540, cid 0, qid 0 00:20:45.955 [2024-07-15 23:24:01.158628] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.955 [2024-07-15 23:24:01.158639] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.955 [2024-07-15 23:24:01.158645] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.955 [2024-07-15 23:24:01.158652] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1749540) on tqpair=0x16e96e0 00:20:45.955 [2024-07-15 23:24:01.158658] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:45.955 [2024-07-15 23:24:01.158666] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:45.955 [2024-07-15 23:24:01.158678] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:45.955 [2024-07-15 23:24:01.158788] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:45.955 [2024-07-15 23:24:01.158797] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:45.955 [2024-07-15 23:24:01.158809] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.955 [2024-07-15 23:24:01.158817] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.955 [2024-07-15 23:24:01.158823] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16e96e0) 00:20:45.955 [2024-07-15 23:24:01.158833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.955 [2024-07-15 23:24:01.158854] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1749540, cid 0, qid 0 00:20:45.955 [2024-07-15 23:24:01.158986] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.955 [2024-07-15 23:24:01.159000] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.955 [2024-07-15 23:24:01.159006] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.956 [2024-07-15 23:24:01.159013] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1749540) on 
tqpair=0x16e96e0 00:20:45.956 [2024-07-15 23:24:01.159021] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:45.956 [2024-07-15 23:24:01.159051] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.956 [2024-07-15 23:24:01.159060] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.956 [2024-07-15 23:24:01.159066] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16e96e0) 00:20:45.956 [2024-07-15 23:24:01.159076] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.956 [2024-07-15 23:24:01.159101] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1749540, cid 0, qid 0 00:20:45.956 [2024-07-15 23:24:01.159200] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.956 [2024-07-15 23:24:01.159212] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.956 [2024-07-15 23:24:01.159218] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.956 [2024-07-15 23:24:01.159224] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1749540) on tqpair=0x16e96e0 00:20:45.956 [2024-07-15 23:24:01.159231] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:45.956 [2024-07-15 23:24:01.159239] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:45.956 [2024-07-15 23:24:01.159252] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:45.956 [2024-07-15 23:24:01.159265] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:45.956 [2024-07-15 23:24:01.159278] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.956 [2024-07-15 23:24:01.159285] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16e96e0) 00:20:45.956 [2024-07-15 23:24:01.159296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.956 [2024-07-15 23:24:01.159316] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1749540, cid 0, qid 0 00:20:45.956 [2024-07-15 23:24:01.159464] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:45.956 [2024-07-15 23:24:01.159475] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:45.956 [2024-07-15 23:24:01.159481] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:45.956 [2024-07-15 23:24:01.159487] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16e96e0): datao=0, datal=4096, cccid=0 00:20:45.956 [2024-07-15 23:24:01.159495] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1749540) on tqpair(0x16e96e0): expected_datao=0, payload_size=4096 00:20:45.956 [2024-07-15 23:24:01.159502] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.956 [2024-07-15 23:24:01.159522] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:45.956 [2024-07-15 23:24:01.159531] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:45.956 [2024-07-15 23:24:01.159589] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.956 [2024-07-15 23:24:01.159603] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.956 [2024-07-15 23:24:01.159609] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.956 [2024-07-15 23:24:01.159615] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1749540) on tqpair=0x16e96e0 00:20:45.956 [2024-07-15 23:24:01.159625] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:45.956 [2024-07-15 23:24:01.159633] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:45.956 [2024-07-15 23:24:01.159640] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:45.956 [2024-07-15 23:24:01.159646] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:45.956 [2024-07-15 23:24:01.159654] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:45.956 [2024-07-15 23:24:01.159661] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:45.956 [2024-07-15 23:24:01.159675] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:45.956 [2024-07-15 23:24:01.159692] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.956 [2024-07-15 23:24:01.159701] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.956 [2024-07-15 23:24:01.159707] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16e96e0) 00:20:45.956 [2024-07-15 23:24:01.159717] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:45.956 [2024-07-15 23:24:01.159764] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1749540, cid 0, qid 0 00:20:45.956 [2024-07-15 23:24:01.159887] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.956 [2024-07-15 23:24:01.159909] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.956 [2024-07-15 23:24:01.159916] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.956 [2024-07-15 23:24:01.159922] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1749540) on tqpair=0x16e96e0 00:20:45.956 [2024-07-15 23:24:01.159933] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.956 [2024-07-15 23:24:01.159940] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.956 [2024-07-15 23:24:01.159946] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16e96e0) 00:20:45.956 [2024-07-15 23:24:01.159956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:45.956 [2024-07-15 23:24:01.159966] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.956 [2024-07-15 23:24:01.159972] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.956 [2024-07-15 23:24:01.159979] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x16e96e0) 00:20:45.956 [2024-07-15 23:24:01.159987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:45.956 [2024-07-15 23:24:01.159997] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.956 [2024-07-15 23:24:01.160003] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.956 [2024-07-15 23:24:01.160009] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x16e96e0) 00:20:45.956 [2024-07-15 23:24:01.160018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:45.956 [2024-07-15 23:24:01.160043] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.956 [2024-07-15 23:24:01.160049] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.956 [2024-07-15 23:24:01.160055] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16e96e0) 00:20:45.956 [2024-07-15 23:24:01.160063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:45.956 [2024-07-15 23:24:01.160072] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:45.956 [2024-07-15 23:24:01.160105] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:45.956 [2024-07-15 23:24:01.160117] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.956 [2024-07-15 23:24:01.160124] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16e96e0) 00:20:45.956 [2024-07-15 23:24:01.160134] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.956 [2024-07-15 23:24:01.160155] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1749540, cid 0, qid 0 00:20:45.956 [2024-07-15 23:24:01.160165] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17496c0, cid 1, qid 0 00:20:45.956 [2024-07-15 23:24:01.160173] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1749840, cid 2, qid 0 00:20:45.956 [2024-07-15 23:24:01.160180] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17499c0, cid 3, qid 0 00:20:45.956 [2024-07-15 23:24:01.160190] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1749b40, cid 4, qid 0 00:20:45.956 [2024-07-15 23:24:01.160430] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.956 [2024-07-15 23:24:01.160441] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.956 [2024-07-15 23:24:01.160447] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.956 [2024-07-15 23:24:01.160453] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1749b40) on tqpair=0x16e96e0 00:20:45.956 [2024-07-15 23:24:01.160460] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:45.956 [2024-07-15 23:24:01.160468] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to 
identify controller iocs specific (timeout 30000 ms) 00:20:45.956 [2024-07-15 23:24:01.160485] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:45.956 [2024-07-15 23:24:01.160497] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:45.956 [2024-07-15 23:24:01.160507] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.956 [2024-07-15 23:24:01.160514] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.956 [2024-07-15 23:24:01.160519] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16e96e0) 00:20:45.956 [2024-07-15 23:24:01.160529] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:45.956 [2024-07-15 23:24:01.160549] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1749b40, cid 4, qid 0 00:20:45.957 [2024-07-15 23:24:01.160772] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.957 [2024-07-15 23:24:01.160787] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.957 [2024-07-15 23:24:01.160794] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.957 [2024-07-15 23:24:01.160800] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1749b40) on tqpair=0x16e96e0 00:20:45.957 [2024-07-15 23:24:01.160867] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:45.957 [2024-07-15 23:24:01.160887] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:45.957 [2024-07-15 23:24:01.160902] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.957 [2024-07-15 23:24:01.160909] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16e96e0) 00:20:45.957 [2024-07-15 23:24:01.160924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.957 [2024-07-15 23:24:01.160946] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1749b40, cid 4, qid 0 00:20:45.957 [2024-07-15 23:24:01.161151] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:45.957 [2024-07-15 23:24:01.161162] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:45.957 [2024-07-15 23:24:01.161168] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:45.957 [2024-07-15 23:24:01.161174] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16e96e0): datao=0, datal=4096, cccid=4 00:20:45.957 [2024-07-15 23:24:01.161181] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1749b40) on tqpair(0x16e96e0): expected_datao=0, payload_size=4096 00:20:45.957 [2024-07-15 23:24:01.161188] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.957 [2024-07-15 23:24:01.161204] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:45.957 [2024-07-15 23:24:01.161212] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:45.957 [2024-07-15 23:24:01.203755] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu 
type = 5 00:20:45.957 [2024-07-15 23:24:01.203775] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.957 [2024-07-15 23:24:01.203786] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.957 [2024-07-15 23:24:01.203793] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1749b40) on tqpair=0x16e96e0 00:20:45.957 [2024-07-15 23:24:01.203811] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:45.957 [2024-07-15 23:24:01.203832] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:45.957 [2024-07-15 23:24:01.203850] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:45.957 [2024-07-15 23:24:01.203864] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.957 [2024-07-15 23:24:01.203872] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16e96e0) 00:20:45.957 [2024-07-15 23:24:01.203883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.957 [2024-07-15 23:24:01.203906] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1749b40, cid 4, qid 0 00:20:45.957 [2024-07-15 23:24:01.204123] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:45.957 [2024-07-15 23:24:01.204135] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:45.957 [2024-07-15 23:24:01.204141] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:45.957 [2024-07-15 23:24:01.204147] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16e96e0): datao=0, datal=4096, cccid=4 00:20:45.957 [2024-07-15 23:24:01.204154] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1749b40) on tqpair(0x16e96e0): expected_datao=0, payload_size=4096 00:20:45.957 [2024-07-15 23:24:01.204161] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.957 [2024-07-15 23:24:01.204171] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:45.957 [2024-07-15 23:24:01.204177] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:45.957 [2024-07-15 23:24:01.204202] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.957 [2024-07-15 23:24:01.204212] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.957 [2024-07-15 23:24:01.204219] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.957 [2024-07-15 23:24:01.204225] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1749b40) on tqpair=0x16e96e0 00:20:45.957 [2024-07-15 23:24:01.204248] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:45.957 [2024-07-15 23:24:01.204267] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:45.957 [2024-07-15 23:24:01.204280] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.957 [2024-07-15 23:24:01.204287] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16e96e0) 00:20:45.957 [2024-07-15 23:24:01.204298] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.957 [2024-07-15 23:24:01.204318] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1749b40, cid 4, qid 0 00:20:45.957 [2024-07-15 23:24:01.204441] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:45.957 [2024-07-15 23:24:01.204452] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:45.957 [2024-07-15 23:24:01.204458] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:45.957 [2024-07-15 23:24:01.204464] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16e96e0): datao=0, datal=4096, cccid=4 00:20:45.957 [2024-07-15 23:24:01.204471] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1749b40) on tqpair(0x16e96e0): expected_datao=0, payload_size=4096 00:20:45.957 [2024-07-15 23:24:01.204478] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.957 [2024-07-15 23:24:01.204497] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:45.957 [2024-07-15 23:24:01.204506] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:45.957 [2024-07-15 23:24:01.244884] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.957 [2024-07-15 23:24:01.244903] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.957 [2024-07-15 23:24:01.244911] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.957 [2024-07-15 23:24:01.244918] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1749b40) on tqpair=0x16e96e0 00:20:45.957 [2024-07-15 23:24:01.244932] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:45.957 [2024-07-15 23:24:01.244948] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:45.957 [2024-07-15 23:24:01.244963] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:45.957 [2024-07-15 23:24:01.244977] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:20:45.957 [2024-07-15 23:24:01.244986] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:45.957 [2024-07-15 23:24:01.244995] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:45.957 [2024-07-15 23:24:01.245004] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:45.957 [2024-07-15 23:24:01.245011] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:45.957 [2024-07-15 23:24:01.245020] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:20:45.957 [2024-07-15 23:24:01.245041] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.957 [2024-07-15 23:24:01.245050] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x16e96e0) 00:20:45.957 [2024-07-15 23:24:01.245076] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.957 [2024-07-15 23:24:01.245087] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.957 [2024-07-15 23:24:01.245094] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.957 [2024-07-15 23:24:01.245100] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16e96e0) 00:20:45.957 [2024-07-15 23:24:01.245109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:45.957 [2024-07-15 23:24:01.245136] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1749b40, cid 4, qid 0 00:20:45.957 [2024-07-15 23:24:01.245147] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1749cc0, cid 5, qid 0 00:20:45.957 [2024-07-15 23:24:01.245264] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.957 [2024-07-15 23:24:01.245278] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.957 [2024-07-15 23:24:01.245284] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.957 [2024-07-15 23:24:01.245291] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1749b40) on tqpair=0x16e96e0 00:20:45.957 [2024-07-15 23:24:01.245300] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.957 [2024-07-15 23:24:01.245309] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.957 [2024-07-15 23:24:01.245315] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.957 [2024-07-15 23:24:01.245321] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1749cc0) on tqpair=0x16e96e0 00:20:45.957 [2024-07-15 23:24:01.245337] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.957 [2024-07-15 23:24:01.245349] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16e96e0) 00:20:45.957 [2024-07-15 23:24:01.245360] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.957 [2024-07-15 23:24:01.245382] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1749cc0, cid 5, qid 0 00:20:45.957 [2024-07-15 23:24:01.245503] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.957 [2024-07-15 23:24:01.245515] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.957 [2024-07-15 23:24:01.245521] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.957 [2024-07-15 23:24:01.245527] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1749cc0) on tqpair=0x16e96e0 00:20:45.957 [2024-07-15 23:24:01.245542] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.957 [2024-07-15 23:24:01.245551] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16e96e0) 00:20:45.957 [2024-07-15 23:24:01.245561] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.957 [2024-07-15 23:24:01.245581] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1749cc0, cid 5, qid 0 00:20:45.957 [2024-07-15 23:24:01.245691] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.957 [2024-07-15 23:24:01.245703] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.957 [2024-07-15 23:24:01.245709] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.957 [2024-07-15 23:24:01.245731] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1749cc0) on tqpair=0x16e96e0 00:20:45.957 [2024-07-15 23:24:01.245758] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.957 [2024-07-15 23:24:01.245769] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16e96e0) 00:20:45.957 [2024-07-15 23:24:01.245780] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.957 [2024-07-15 23:24:01.245801] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1749cc0, cid 5, qid 0 00:20:45.958 [2024-07-15 23:24:01.245907] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.958 [2024-07-15 23:24:01.245919] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.958 [2024-07-15 23:24:01.245926] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.958 [2024-07-15 23:24:01.245932] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1749cc0) on tqpair=0x16e96e0 00:20:45.958 [2024-07-15 23:24:01.245956] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.958 [2024-07-15 23:24:01.245967] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16e96e0) 00:20:45.958 [2024-07-15 23:24:01.245978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.958 [2024-07-15 23:24:01.245990] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.958 [2024-07-15 23:24:01.245997] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16e96e0) 00:20:45.958 [2024-07-15 23:24:01.246007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.958 [2024-07-15 23:24:01.246019] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.958 [2024-07-15 23:24:01.246026] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x16e96e0) 00:20:45.958 [2024-07-15 23:24:01.246035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.958 [2024-07-15 23:24:01.246047] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.958 [2024-07-15 23:24:01.246054] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x16e96e0) 00:20:45.958 [2024-07-15 23:24:01.246067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.958 [2024-07-15 23:24:01.246106] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1749cc0, cid 5, qid 0 00:20:45.958 [2024-07-15 23:24:01.246117] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1749b40, cid 4, qid 0 
00:20:45.958 [2024-07-15 23:24:01.246124] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1749e40, cid 6, qid 0 00:20:45.958 [2024-07-15 23:24:01.246132] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1749fc0, cid 7, qid 0 00:20:45.958 [2024-07-15 23:24:01.246398] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:45.958 [2024-07-15 23:24:01.246410] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:45.958 [2024-07-15 23:24:01.246416] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:45.958 [2024-07-15 23:24:01.246422] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16e96e0): datao=0, datal=8192, cccid=5 00:20:45.958 [2024-07-15 23:24:01.246430] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1749cc0) on tqpair(0x16e96e0): expected_datao=0, payload_size=8192 00:20:45.958 [2024-07-15 23:24:01.246437] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.958 [2024-07-15 23:24:01.246458] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:45.958 [2024-07-15 23:24:01.246467] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:45.958 [2024-07-15 23:24:01.246475] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:45.958 [2024-07-15 23:24:01.246484] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:45.958 [2024-07-15 23:24:01.246490] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:45.958 [2024-07-15 23:24:01.246496] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16e96e0): datao=0, datal=512, cccid=4 00:20:45.958 [2024-07-15 23:24:01.246503] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1749b40) on tqpair(0x16e96e0): expected_datao=0, payload_size=512 00:20:45.958 [2024-07-15 23:24:01.246510] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.958 [2024-07-15 23:24:01.246519] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:45.958 [2024-07-15 23:24:01.246525] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:45.958 [2024-07-15 23:24:01.246533] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:45.958 [2024-07-15 23:24:01.246542] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:45.958 [2024-07-15 23:24:01.246548] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:45.958 [2024-07-15 23:24:01.246553] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16e96e0): datao=0, datal=512, cccid=6 00:20:45.958 [2024-07-15 23:24:01.246560] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1749e40) on tqpair(0x16e96e0): expected_datao=0, payload_size=512 00:20:45.958 [2024-07-15 23:24:01.246568] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.958 [2024-07-15 23:24:01.246576] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:45.958 [2024-07-15 23:24:01.246583] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:45.958 [2024-07-15 23:24:01.246591] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:45.958 [2024-07-15 23:24:01.246599] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:45.958 [2024-07-15 23:24:01.246605] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:45.958 [2024-07-15 23:24:01.246611] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16e96e0): datao=0, datal=4096, cccid=7 00:20:45.958 [2024-07-15 23:24:01.246618] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1749fc0) on tqpair(0x16e96e0): expected_datao=0, payload_size=4096 00:20:45.958 [2024-07-15 23:24:01.246625] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.958 [2024-07-15 23:24:01.246635] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:45.958 [2024-07-15 23:24:01.246645] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:45.958 [2024-07-15 23:24:01.246656] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.958 [2024-07-15 23:24:01.246665] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.958 [2024-07-15 23:24:01.246671] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.958 [2024-07-15 23:24:01.246678] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1749cc0) on tqpair=0x16e96e0 00:20:45.958 [2024-07-15 23:24:01.246695] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.958 [2024-07-15 23:24:01.246705] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.958 [2024-07-15 23:24:01.246711] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.958 [2024-07-15 23:24:01.246717] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1749b40) on tqpair=0x16e96e0 00:20:45.958 [2024-07-15 23:24:01.246731] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.958 [2024-07-15 23:24:01.246766] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.958 [2024-07-15 23:24:01.246774] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.958 [2024-07-15 23:24:01.246780] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1749e40) on tqpair=0x16e96e0 00:20:45.958 [2024-07-15 23:24:01.246791] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.958 [2024-07-15 23:24:01.246800] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.958 [2024-07-15 23:24:01.246807] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.958 [2024-07-15 23:24:01.246813] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1749fc0) on tqpair=0x16e96e0 00:20:45.958 ===================================================== 00:20:45.958 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:45.958 ===================================================== 00:20:45.958 Controller Capabilities/Features 00:20:45.958 ================================ 00:20:45.958 Vendor ID: 8086 00:20:45.958 Subsystem Vendor ID: 8086 00:20:45.958 Serial Number: SPDK00000000000001 00:20:45.958 Model Number: SPDK bdev Controller 00:20:45.958 Firmware Version: 24.09 00:20:45.958 Recommended Arb Burst: 6 00:20:45.958 IEEE OUI Identifier: e4 d2 5c 00:20:45.958 Multi-path I/O 00:20:45.958 May have multiple subsystem ports: Yes 00:20:45.958 May have multiple controllers: Yes 00:20:45.958 Associated with SR-IOV VF: No 00:20:45.958 Max Data Transfer Size: 131072 00:20:45.958 Max Number of Namespaces: 32 00:20:45.958 Max Number of I/O Queues: 127 00:20:45.958 NVMe Specification Version (VS): 1.3 00:20:45.958 NVMe Specification Version (Identify): 1.3 00:20:45.958 Maximum Queue Entries: 128 00:20:45.958 Contiguous Queues Required: Yes 00:20:45.958 
Arbitration Mechanisms Supported 00:20:45.958 Weighted Round Robin: Not Supported 00:20:45.958 Vendor Specific: Not Supported 00:20:45.958 Reset Timeout: 15000 ms 00:20:45.958 Doorbell Stride: 4 bytes 00:20:45.958 NVM Subsystem Reset: Not Supported 00:20:45.958 Command Sets Supported 00:20:45.958 NVM Command Set: Supported 00:20:45.958 Boot Partition: Not Supported 00:20:45.958 Memory Page Size Minimum: 4096 bytes 00:20:45.958 Memory Page Size Maximum: 4096 bytes 00:20:45.958 Persistent Memory Region: Not Supported 00:20:45.958 Optional Asynchronous Events Supported 00:20:45.958 Namespace Attribute Notices: Supported 00:20:45.958 Firmware Activation Notices: Not Supported 00:20:45.958 ANA Change Notices: Not Supported 00:20:45.958 PLE Aggregate Log Change Notices: Not Supported 00:20:45.958 LBA Status Info Alert Notices: Not Supported 00:20:45.958 EGE Aggregate Log Change Notices: Not Supported 00:20:45.958 Normal NVM Subsystem Shutdown event: Not Supported 00:20:45.958 Zone Descriptor Change Notices: Not Supported 00:20:45.958 Discovery Log Change Notices: Not Supported 00:20:45.958 Controller Attributes 00:20:45.958 128-bit Host Identifier: Supported 00:20:45.958 Non-Operational Permissive Mode: Not Supported 00:20:45.958 NVM Sets: Not Supported 00:20:45.958 Read Recovery Levels: Not Supported 00:20:45.958 Endurance Groups: Not Supported 00:20:45.958 Predictable Latency Mode: Not Supported 00:20:45.958 Traffic Based Keep ALive: Not Supported 00:20:45.958 Namespace Granularity: Not Supported 00:20:45.958 SQ Associations: Not Supported 00:20:45.958 UUID List: Not Supported 00:20:45.958 Multi-Domain Subsystem: Not Supported 00:20:45.958 Fixed Capacity Management: Not Supported 00:20:45.958 Variable Capacity Management: Not Supported 00:20:45.958 Delete Endurance Group: Not Supported 00:20:45.958 Delete NVM Set: Not Supported 00:20:45.958 Extended LBA Formats Supported: Not Supported 00:20:45.958 Flexible Data Placement Supported: Not Supported 00:20:45.958 00:20:45.958 Controller Memory Buffer Support 00:20:45.958 ================================ 00:20:45.958 Supported: No 00:20:45.958 00:20:45.958 Persistent Memory Region Support 00:20:45.958 ================================ 00:20:45.958 Supported: No 00:20:45.958 00:20:45.958 Admin Command Set Attributes 00:20:45.958 ============================ 00:20:45.958 Security Send/Receive: Not Supported 00:20:45.958 Format NVM: Not Supported 00:20:45.958 Firmware Activate/Download: Not Supported 00:20:45.958 Namespace Management: Not Supported 00:20:45.958 Device Self-Test: Not Supported 00:20:45.958 Directives: Not Supported 00:20:45.958 NVMe-MI: Not Supported 00:20:45.959 Virtualization Management: Not Supported 00:20:45.959 Doorbell Buffer Config: Not Supported 00:20:45.959 Get LBA Status Capability: Not Supported 00:20:45.959 Command & Feature Lockdown Capability: Not Supported 00:20:45.959 Abort Command Limit: 4 00:20:45.959 Async Event Request Limit: 4 00:20:45.959 Number of Firmware Slots: N/A 00:20:45.959 Firmware Slot 1 Read-Only: N/A 00:20:45.959 Firmware Activation Without Reset: N/A 00:20:45.959 Multiple Update Detection Support: N/A 00:20:45.959 Firmware Update Granularity: No Information Provided 00:20:45.959 Per-Namespace SMART Log: No 00:20:45.959 Asymmetric Namespace Access Log Page: Not Supported 00:20:45.959 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:45.959 Command Effects Log Page: Supported 00:20:45.959 Get Log Page Extended Data: Supported 00:20:45.959 Telemetry Log Pages: Not Supported 00:20:45.959 Persistent Event Log 
Pages: Not Supported 00:20:45.959 Supported Log Pages Log Page: May Support 00:20:45.959 Commands Supported & Effects Log Page: Not Supported 00:20:45.959 Feature Identifiers & Effects Log Page:May Support 00:20:45.959 NVMe-MI Commands & Effects Log Page: May Support 00:20:45.959 Data Area 4 for Telemetry Log: Not Supported 00:20:45.959 Error Log Page Entries Supported: 128 00:20:45.959 Keep Alive: Supported 00:20:45.959 Keep Alive Granularity: 10000 ms 00:20:45.959 00:20:45.959 NVM Command Set Attributes 00:20:45.959 ========================== 00:20:45.959 Submission Queue Entry Size 00:20:45.959 Max: 64 00:20:45.959 Min: 64 00:20:45.959 Completion Queue Entry Size 00:20:45.959 Max: 16 00:20:45.959 Min: 16 00:20:45.959 Number of Namespaces: 32 00:20:45.959 Compare Command: Supported 00:20:45.959 Write Uncorrectable Command: Not Supported 00:20:45.959 Dataset Management Command: Supported 00:20:45.959 Write Zeroes Command: Supported 00:20:45.959 Set Features Save Field: Not Supported 00:20:45.959 Reservations: Supported 00:20:45.959 Timestamp: Not Supported 00:20:45.959 Copy: Supported 00:20:45.959 Volatile Write Cache: Present 00:20:45.959 Atomic Write Unit (Normal): 1 00:20:45.959 Atomic Write Unit (PFail): 1 00:20:45.959 Atomic Compare & Write Unit: 1 00:20:45.959 Fused Compare & Write: Supported 00:20:45.959 Scatter-Gather List 00:20:45.959 SGL Command Set: Supported 00:20:45.959 SGL Keyed: Supported 00:20:45.959 SGL Bit Bucket Descriptor: Not Supported 00:20:45.959 SGL Metadata Pointer: Not Supported 00:20:45.959 Oversized SGL: Not Supported 00:20:45.959 SGL Metadata Address: Not Supported 00:20:45.959 SGL Offset: Supported 00:20:45.959 Transport SGL Data Block: Not Supported 00:20:45.959 Replay Protected Memory Block: Not Supported 00:20:45.959 00:20:45.959 Firmware Slot Information 00:20:45.959 ========================= 00:20:45.959 Active slot: 1 00:20:45.959 Slot 1 Firmware Revision: 24.09 00:20:45.959 00:20:45.959 00:20:45.959 Commands Supported and Effects 00:20:45.959 ============================== 00:20:45.959 Admin Commands 00:20:45.959 -------------- 00:20:45.959 Get Log Page (02h): Supported 00:20:45.959 Identify (06h): Supported 00:20:45.959 Abort (08h): Supported 00:20:45.959 Set Features (09h): Supported 00:20:45.959 Get Features (0Ah): Supported 00:20:45.959 Asynchronous Event Request (0Ch): Supported 00:20:45.959 Keep Alive (18h): Supported 00:20:45.959 I/O Commands 00:20:45.959 ------------ 00:20:45.959 Flush (00h): Supported LBA-Change 00:20:45.959 Write (01h): Supported LBA-Change 00:20:45.959 Read (02h): Supported 00:20:45.959 Compare (05h): Supported 00:20:45.959 Write Zeroes (08h): Supported LBA-Change 00:20:45.959 Dataset Management (09h): Supported LBA-Change 00:20:45.959 Copy (19h): Supported LBA-Change 00:20:45.959 00:20:45.959 Error Log 00:20:45.959 ========= 00:20:45.959 00:20:45.959 Arbitration 00:20:45.959 =========== 00:20:45.959 Arbitration Burst: 1 00:20:45.959 00:20:45.959 Power Management 00:20:45.959 ================ 00:20:45.959 Number of Power States: 1 00:20:45.959 Current Power State: Power State #0 00:20:45.959 Power State #0: 00:20:45.959 Max Power: 0.00 W 00:20:45.959 Non-Operational State: Operational 00:20:45.959 Entry Latency: Not Reported 00:20:45.959 Exit Latency: Not Reported 00:20:45.959 Relative Read Throughput: 0 00:20:45.959 Relative Read Latency: 0 00:20:45.959 Relative Write Throughput: 0 00:20:45.959 Relative Write Latency: 0 00:20:45.959 Idle Power: Not Reported 00:20:45.959 Active Power: Not Reported 00:20:45.959 
Non-Operational Permissive Mode: Not Supported 00:20:45.959 00:20:45.959 Health Information 00:20:45.959 ================== 00:20:45.959 Critical Warnings: 00:20:45.959 Available Spare Space: OK 00:20:45.959 Temperature: OK 00:20:45.959 Device Reliability: OK 00:20:45.959 Read Only: No 00:20:45.959 Volatile Memory Backup: OK 00:20:45.959 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:45.959 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:20:45.959 Available Spare: 0% 00:20:45.959 Available Spare Threshold: 0% 00:20:45.959 Life Percentage Used:[2024-07-15 23:24:01.246929] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.959 [2024-07-15 23:24:01.246941] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x16e96e0) 00:20:45.959 [2024-07-15 23:24:01.246952] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.959 [2024-07-15 23:24:01.246974] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1749fc0, cid 7, qid 0 00:20:45.959 [2024-07-15 23:24:01.247156] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.959 [2024-07-15 23:24:01.247168] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.959 [2024-07-15 23:24:01.247174] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.959 [2024-07-15 23:24:01.247181] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1749fc0) on tqpair=0x16e96e0 00:20:45.959 [2024-07-15 23:24:01.247224] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:45.959 [2024-07-15 23:24:01.247243] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1749540) on tqpair=0x16e96e0 00:20:45.959 [2024-07-15 23:24:01.247253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.959 [2024-07-15 23:24:01.247262] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17496c0) on tqpair=0x16e96e0 00:20:45.959 [2024-07-15 23:24:01.247269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.959 [2024-07-15 23:24:01.247277] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1749840) on tqpair=0x16e96e0 00:20:45.959 [2024-07-15 23:24:01.247284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.959 [2024-07-15 23:24:01.247292] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17499c0) on tqpair=0x16e96e0 00:20:45.959 [2024-07-15 23:24:01.247299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.959 [2024-07-15 23:24:01.247313] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.959 [2024-07-15 23:24:01.247320] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.959 [2024-07-15 23:24:01.247330] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16e96e0) 00:20:45.959 [2024-07-15 23:24:01.247341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.959 [2024-07-15 23:24:01.247363] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17499c0, cid 3, qid 0 00:20:45.959 [2024-07-15 23:24:01.247542] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.959 [2024-07-15 23:24:01.247556] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.959 [2024-07-15 23:24:01.247562] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.959 [2024-07-15 23:24:01.247569] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17499c0) on tqpair=0x16e96e0 00:20:45.959 [2024-07-15 23:24:01.247579] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.959 [2024-07-15 23:24:01.247587] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.959 [2024-07-15 23:24:01.247593] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16e96e0) 00:20:45.959 [2024-07-15 23:24:01.247603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.959 [2024-07-15 23:24:01.247628] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17499c0, cid 3, qid 0 00:20:45.959 [2024-07-15 23:24:01.251752] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.959 [2024-07-15 23:24:01.251769] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.959 [2024-07-15 23:24:01.251786] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.959 [2024-07-15 23:24:01.251793] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17499c0) on tqpair=0x16e96e0 00:20:45.959 [2024-07-15 23:24:01.251801] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:45.959 [2024-07-15 23:24:01.251808] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:45.959 [2024-07-15 23:24:01.251826] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.959 [2024-07-15 23:24:01.251835] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.959 [2024-07-15 23:24:01.251842] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16e96e0) 00:20:45.959 [2024-07-15 23:24:01.251853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.959 [2024-07-15 23:24:01.251875] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17499c0, cid 3, qid 0 00:20:45.959 [2024-07-15 23:24:01.252014] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.959 [2024-07-15 23:24:01.252044] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.959 [2024-07-15 23:24:01.252050] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.959 [2024-07-15 23:24:01.252057] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17499c0) on tqpair=0x16e96e0 00:20:45.959 [2024-07-15 23:24:01.252072] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 0 milliseconds 00:20:46.218 0% 00:20:46.218 Data Units Read: 0 00:20:46.218 Data Units Written: 0 00:20:46.218 Host Read Commands: 0 00:20:46.218 Host Write Commands: 0 00:20:46.218 Controller Busy Time: 0 minutes 00:20:46.218 Power Cycles: 0 00:20:46.218 Power On Hours: 0 hours 00:20:46.218 Unsafe Shutdowns: 0 00:20:46.218 
Unrecoverable Media Errors: 0 00:20:46.218 Lifetime Error Log Entries: 0 00:20:46.218 Warning Temperature Time: 0 minutes 00:20:46.218 Critical Temperature Time: 0 minutes 00:20:46.218 00:20:46.218 Number of Queues 00:20:46.218 ================ 00:20:46.218 Number of I/O Submission Queues: 127 00:20:46.218 Number of I/O Completion Queues: 127 00:20:46.218 00:20:46.218 Active Namespaces 00:20:46.218 ================= 00:20:46.218 Namespace ID:1 00:20:46.218 Error Recovery Timeout: Unlimited 00:20:46.218 Command Set Identifier: NVM (00h) 00:20:46.218 Deallocate: Supported 00:20:46.218 Deallocated/Unwritten Error: Not Supported 00:20:46.218 Deallocated Read Value: Unknown 00:20:46.218 Deallocate in Write Zeroes: Not Supported 00:20:46.218 Deallocated Guard Field: 0xFFFF 00:20:46.218 Flush: Supported 00:20:46.218 Reservation: Supported 00:20:46.218 Namespace Sharing Capabilities: Multiple Controllers 00:20:46.218 Size (in LBAs): 131072 (0GiB) 00:20:46.218 Capacity (in LBAs): 131072 (0GiB) 00:20:46.218 Utilization (in LBAs): 131072 (0GiB) 00:20:46.218 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:46.218 EUI64: ABCDEF0123456789 00:20:46.218 UUID: dffdb32b-0eae-4f2b-9b50-803b37bc7e45 00:20:46.218 Thin Provisioning: Not Supported 00:20:46.218 Per-NS Atomic Units: Yes 00:20:46.218 Atomic Boundary Size (Normal): 0 00:20:46.218 Atomic Boundary Size (PFail): 0 00:20:46.218 Atomic Boundary Offset: 0 00:20:46.218 Maximum Single Source Range Length: 65535 00:20:46.218 Maximum Copy Length: 65535 00:20:46.218 Maximum Source Range Count: 1 00:20:46.218 NGUID/EUI64 Never Reused: No 00:20:46.218 Namespace Write Protected: No 00:20:46.218 Number of LBA Formats: 1 00:20:46.218 Current LBA Format: LBA Format #00 00:20:46.218 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:46.218 00:20:46.218 23:24:01 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:20:46.218 23:24:01 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:46.218 23:24:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.218 23:24:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:46.218 23:24:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.218 23:24:01 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:46.218 23:24:01 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:20:46.218 23:24:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:46.218 23:24:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:20:46.218 23:24:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:46.218 23:24:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:20:46.218 23:24:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:46.218 23:24:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:46.218 rmmod nvme_tcp 00:20:46.218 rmmod nvme_fabrics 00:20:46.218 rmmod nvme_keyring 00:20:46.218 23:24:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:46.218 23:24:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:20:46.218 23:24:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:20:46.218 23:24:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2393226 ']' 00:20:46.218 23:24:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2393226 00:20:46.218 23:24:01 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 2393226 ']' 00:20:46.218 23:24:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 2393226 00:20:46.218 23:24:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:20:46.218 23:24:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:46.218 23:24:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2393226 00:20:46.218 23:24:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:46.218 23:24:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:46.218 23:24:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2393226' 00:20:46.218 killing process with pid 2393226 00:20:46.218 23:24:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 2393226 00:20:46.218 23:24:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 2393226 00:20:46.477 23:24:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:46.477 23:24:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:46.477 23:24:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:46.477 23:24:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:46.477 23:24:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:46.477 23:24:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.477 23:24:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:46.477 23:24:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:48.378 23:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:48.378 00:20:48.378 real 0m5.951s 00:20:48.378 user 0m7.054s 00:20:48.378 sys 0m1.827s 00:20:48.378 23:24:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:48.378 23:24:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:48.378 ************************************ 00:20:48.378 END TEST nvmf_identify 00:20:48.378 ************************************ 00:20:48.636 23:24:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:48.636 23:24:03 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:48.636 23:24:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:48.636 23:24:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:48.636 23:24:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:48.636 ************************************ 00:20:48.636 START TEST nvmf_perf 00:20:48.636 ************************************ 00:20:48.636 23:24:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:48.636 * Looking for test storage... 
00:20:48.636 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:48.636 23:24:03 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:48.636 23:24:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:20:48.636 23:24:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:48.636 23:24:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:48.636 23:24:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:48.636 23:24:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:48.636 23:24:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:48.636 23:24:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:48.636 23:24:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:48.636 23:24:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:48.636 23:24:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:48.636 23:24:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:48.636 23:24:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:48.636 23:24:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:48.636 23:24:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:48.636 23:24:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:48.636 23:24:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:48.636 23:24:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:48.636 23:24:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:48.636 23:24:03 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:48.636 23:24:03 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:48.636 23:24:03 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:48.636 23:24:03 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.636 23:24:03 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.636 23:24:03 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.636 23:24:03 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:20:48.636 23:24:03 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.636 23:24:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:20:48.636 23:24:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:48.636 23:24:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:48.636 23:24:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:48.636 23:24:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:48.636 23:24:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:48.636 23:24:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:48.636 23:24:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:48.636 23:24:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:48.636 23:24:03 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:48.636 23:24:03 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:48.636 23:24:03 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:48.636 23:24:03 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:20:48.636 23:24:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:48.636 23:24:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:48.636 23:24:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:48.636 23:24:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:48.636 23:24:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:48.636 23:24:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:48.636 23:24:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:48.636 23:24:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:48.637 23:24:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:48.637 23:24:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:48.637 23:24:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:20:48.637 23:24:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:50.538 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:50.538 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:50.538 Found net devices under 0000:84:00.0: cvl_0_0 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:50.538 Found net devices under 0000:84:00.1: cvl_0_1 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:50.538 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:50.797 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:50.797 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:50.797 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:50.797 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:50.797 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:50.797 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:50.797 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:50.797 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:50.797 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:20:50.797 00:20:50.797 --- 10.0.0.2 ping statistics --- 00:20:50.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.797 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:20:50.797 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:50.797 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:50.797 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:20:50.797 00:20:50.797 --- 10.0.0.1 ping statistics --- 00:20:50.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.797 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:20:50.797 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:50.797 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:20:50.797 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:50.797 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:50.797 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:50.797 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:50.797 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:50.797 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:50.797 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:50.797 23:24:05 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:50.797 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:50.797 23:24:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:50.797 23:24:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:50.797 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2395453 00:20:50.797 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:50.797 23:24:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2395453 00:20:50.797 23:24:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 2395453 ']' 00:20:50.797 23:24:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:50.797 23:24:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:50.797 23:24:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:50.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:50.797 23:24:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:50.797 23:24:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:50.797 [2024-07-15 23:24:06.039400] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:20:50.797 [2024-07-15 23:24:06.039481] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:50.797 EAL: No free 2048 kB hugepages reported on node 1 00:20:50.797 [2024-07-15 23:24:06.109151] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:51.056 [2024-07-15 23:24:06.227228] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:51.056 [2024-07-15 23:24:06.227283] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
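
For reference, the nvmftestinit plumbing traced above isolates the target-side port in its own network namespace before the target application starts; the sequence reduces to roughly the following (interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing are specific to this host, long paths shortened):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                                   # reachability check in both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
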
00:20:51.056 [2024-07-15 23:24:06.227313] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:51.056 [2024-07-15 23:24:06.227324] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:51.056 [2024-07-15 23:24:06.227335] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:51.056 [2024-07-15 23:24:06.227402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:51.056 [2024-07-15 23:24:06.227493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:51.056 [2024-07-15 23:24:06.227554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.056 [2024-07-15 23:24:06.227551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:51.989 23:24:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:51.989 23:24:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:20:51.989 23:24:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:51.989 23:24:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:51.989 23:24:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:51.989 23:24:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:51.989 23:24:07 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:20:51.989 23:24:07 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:20:55.269 23:24:10 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:20:55.269 23:24:10 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:55.269 23:24:10 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:82:00.0 00:20:55.269 23:24:10 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:55.527 23:24:10 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:55.527 23:24:10 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:82:00.0 ']' 00:20:55.527 23:24:10 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:55.527 23:24:10 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:55.527 23:24:10 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:55.527 [2024-07-15 23:24:10.835765] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:55.784 23:24:10 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:56.042 23:24:11 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:56.042 23:24:11 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:56.300 23:24:11 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:56.300 23:24:11 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:56.300 23:24:11 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:56.558 [2024-07-15 23:24:11.827304] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:56.558 23:24:11 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:56.816 23:24:12 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:82:00.0 ']' 00:20:56.816 23:24:12 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0' 00:20:56.816 23:24:12 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:56.816 23:24:12 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0' 00:20:58.187 Initializing NVMe Controllers 00:20:58.187 Attached to NVMe Controller at 0000:82:00.0 [8086:0a54] 00:20:58.187 Associating PCIE (0000:82:00.0) NSID 1 with lcore 0 00:20:58.187 Initialization complete. Launching workers. 00:20:58.187 ======================================================== 00:20:58.187 Latency(us) 00:20:58.187 Device Information : IOPS MiB/s Average min max 00:20:58.187 PCIE (0000:82:00.0) NSID 1 from core 0: 85939.51 335.70 371.76 10.53 4360.22 00:20:58.187 ======================================================== 00:20:58.187 Total : 85939.51 335.70 371.76 10.53 4360.22 00:20:58.187 00:20:58.187 23:24:13 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:58.187 EAL: No free 2048 kB hugepages reported on node 1 00:20:59.556 Initializing NVMe Controllers 00:20:59.556 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:59.556 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:59.556 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:59.556 Initialization complete. Launching workers. 
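
The rpc.py calls that perf.sh drives above provision the target end to end; shortened to the script names, the sequence is roughly (gen_nvme.sh feeds the local 0000:82:00.0 drive into the configuration as Nvme0n1, and Malloc0 is the 64 MB, 512-byte-block malloc bdev created just before):

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py bdev_malloc_create 64 512                                          # -> Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0           # added first -> NSID 1 in the runs below
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1           # the local NVMe drive -> NSID 2
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
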
00:20:59.556 ======================================================== 00:20:59.556 Latency(us) 00:20:59.556 Device Information : IOPS MiB/s Average min max 00:20:59.556 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 91.58 0.36 11018.19 241.44 45005.63 00:20:59.556 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 71.67 0.28 14062.10 7002.18 47893.21 00:20:59.556 ======================================================== 00:20:59.556 Total : 163.25 0.64 12354.54 241.44 47893.21 00:20:59.556 00:20:59.556 23:24:14 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:59.556 EAL: No free 2048 kB hugepages reported on node 1 00:21:00.489 Initializing NVMe Controllers 00:21:00.489 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:00.489 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:00.489 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:00.489 Initialization complete. Launching workers. 00:21:00.489 ======================================================== 00:21:00.489 Latency(us) 00:21:00.489 Device Information : IOPS MiB/s Average min max 00:21:00.489 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8282.86 32.35 3866.03 680.66 9262.57 00:21:00.489 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3810.94 14.89 8410.37 4221.72 15967.60 00:21:00.489 ======================================================== 00:21:00.489 Total : 12093.79 47.24 5298.02 680.66 15967.60 00:21:00.489 00:21:00.489 23:24:15 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:21:00.489 23:24:15 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:21:00.489 23:24:15 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:00.489 EAL: No free 2048 kB hugepages reported on node 1 00:21:03.015 Initializing NVMe Controllers 00:21:03.015 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:03.015 Controller IO queue size 128, less than required. 00:21:03.015 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:03.015 Controller IO queue size 128, less than required. 00:21:03.015 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:03.015 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:03.015 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:03.015 Initialization complete. Launching workers. 
00:21:03.015 ======================================================== 00:21:03.015 Latency(us) 00:21:03.015 Device Information : IOPS MiB/s Average min max 00:21:03.015 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1254.48 313.62 104539.83 71594.31 169066.85 00:21:03.015 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 602.55 150.64 221762.92 112760.43 333186.74 00:21:03.015 ======================================================== 00:21:03.015 Total : 1857.02 464.26 142575.15 71594.31 333186.74 00:21:03.016 00:21:03.271 23:24:18 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:21:03.271 EAL: No free 2048 kB hugepages reported on node 1 00:21:03.271 No valid NVMe controllers or AIO or URING devices found 00:21:03.271 Initializing NVMe Controllers 00:21:03.271 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:03.271 Controller IO queue size 128, less than required. 00:21:03.271 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:03.271 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:21:03.271 Controller IO queue size 128, less than required. 00:21:03.271 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:03.271 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:21:03.271 WARNING: Some requested NVMe devices were skipped 00:21:03.271 23:24:18 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:21:03.271 EAL: No free 2048 kB hugepages reported on node 1 00:21:05.794 Initializing NVMe Controllers 00:21:05.794 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:05.794 Controller IO queue size 128, less than required. 00:21:05.794 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:05.794 Controller IO queue size 128, less than required. 00:21:05.794 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:05.794 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:05.794 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:05.794 Initialization complete. Launching workers. 
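
Each result table above comes from the same initiator binary pointed either at the local drive or at the TCP target; the transport is selected purely by the -r transport ID string. A representative invocation from the trace, path shortened:

  spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

Here -q is the queue depth, -o the I/O size in bytes, -w the workload pattern with -M the read percentage of the mix, and -t the run time in seconds; the local baseline earlier used -r 'trtype:PCIe traddr:0000:82:00.0' instead. The consistent gap between the two namespaces in these tables is what the setup would suggest: NSID 1 is the Malloc0 ramdisk, NSID 2 the physical drive behind the same TCP connection.
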
00:21:05.794 00:21:05.794 ==================== 00:21:05.794 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:21:05.794 TCP transport: 00:21:05.794 polls: 15168 00:21:05.794 idle_polls: 6235 00:21:05.794 sock_completions: 8933 00:21:05.794 nvme_completions: 4831 00:21:05.794 submitted_requests: 7228 00:21:05.794 queued_requests: 1 00:21:05.794 00:21:05.794 ==================== 00:21:05.794 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:21:05.794 TCP transport: 00:21:05.794 polls: 16775 00:21:05.794 idle_polls: 6984 00:21:05.794 sock_completions: 9791 00:21:05.794 nvme_completions: 4477 00:21:05.794 submitted_requests: 6738 00:21:05.794 queued_requests: 1 00:21:05.794 ======================================================== 00:21:05.794 Latency(us) 00:21:05.794 Device Information : IOPS MiB/s Average min max 00:21:05.794 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1207.50 301.87 108976.22 49190.21 178573.08 00:21:05.794 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1119.00 279.75 118010.68 61262.15 201937.00 00:21:05.794 ======================================================== 00:21:05.794 Total : 2326.50 581.62 113321.61 49190.21 201937.00 00:21:05.794 00:21:05.794 23:24:21 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:21:05.794 23:24:21 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:06.051 23:24:21 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:21:06.051 23:24:21 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:06.051 23:24:21 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:21:06.051 23:24:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:06.051 23:24:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:21:06.051 23:24:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:06.051 23:24:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:21:06.051 23:24:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:06.051 23:24:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:06.051 rmmod nvme_tcp 00:21:06.051 rmmod nvme_fabrics 00:21:06.051 rmmod nvme_keyring 00:21:06.309 23:24:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:06.309 23:24:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:21:06.309 23:24:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:21:06.309 23:24:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2395453 ']' 00:21:06.309 23:24:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2395453 00:21:06.309 23:24:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 2395453 ']' 00:21:06.309 23:24:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 2395453 00:21:06.309 23:24:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:21:06.309 23:24:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:06.309 23:24:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2395453 00:21:06.309 23:24:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:06.309 23:24:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:06.309 23:24:21 nvmf_tcp.nvmf_perf 
-- common/autotest_common.sh@966 -- # echo 'killing process with pid 2395453' 00:21:06.309 killing process with pid 2395453 00:21:06.309 23:24:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 2395453 00:21:06.309 23:24:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 2395453 00:21:08.219 23:24:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:08.219 23:24:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:08.219 23:24:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:08.219 23:24:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:08.219 23:24:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:08.219 23:24:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.219 23:24:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:08.219 23:24:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.179 23:24:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:10.179 00:21:10.179 real 0m21.378s 00:21:10.179 user 1m6.078s 00:21:10.179 sys 0m5.559s 00:21:10.179 23:24:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:10.179 23:24:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:10.179 ************************************ 00:21:10.179 END TEST nvmf_perf 00:21:10.179 ************************************ 00:21:10.179 23:24:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:10.179 23:24:25 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:10.179 23:24:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:10.179 23:24:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:10.179 23:24:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:10.179 ************************************ 00:21:10.179 START TEST nvmf_fio_host 00:21:10.179 ************************************ 00:21:10.179 23:24:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:10.179 * Looking for test storage... 
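
Before nvmf_fio_host begins its own setup, the perf target above was torn down by nvmftestfini; stripped of the xtrace prefixes, that sequence is roughly (the _remove_spdk_ns step, whose output is redirected away in the trace, is what removes the cvl_0_0_ns_spdk namespace created earlier):

  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-tcp            # produces the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid" # the nvmf_tgt started for this test
  _remove_spdk_ns
  ip -4 addr flush cvl_0_1
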
00:21:10.179 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:10.179 23:24:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:10.179 23:24:25 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:10.179 23:24:25 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:10.179 23:24:25 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:10.179 23:24:25 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.179 23:24:25 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.179 23:24:25 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.179 23:24:25 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:10.179 23:24:25 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.179 23:24:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:10.179 23:24:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:21:10.179 23:24:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:10.179 23:24:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:10.179 23:24:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:10.179 23:24:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:10.179 23:24:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:10.179 23:24:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:10.179 23:24:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:10.179 23:24:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:10.179 23:24:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:10.179 23:24:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:10.179 23:24:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:10.179 23:24:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:10.179 23:24:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:10.179 23:24:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:10.180 23:24:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:10.180 23:24:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:10.180 23:24:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:10.180 23:24:25 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:10.180 23:24:25 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:10.180 23:24:25 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:10.180 23:24:25 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.180 23:24:25 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.180 23:24:25 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.180 23:24:25 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:10.180 23:24:25 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.180 23:24:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:21:10.180 23:24:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:10.180 23:24:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:10.180 23:24:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:10.180 23:24:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:10.180 23:24:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:10.180 23:24:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:10.180 23:24:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:10.180 23:24:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:10.180 23:24:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:10.180 23:24:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:21:10.180 23:24:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:10.180 23:24:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:10.180 23:24:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:10.180 23:24:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:10.180 23:24:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:10.180 23:24:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.180 23:24:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:10.180 23:24:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.180 23:24:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:10.180 23:24:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:10.180 23:24:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:21:10.180 23:24:25 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:12.084 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:12.084 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:12.084 Found net devices under 0000:84:00.0: cvl_0_0 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:12.084 Found net devices under 0000:84:00.1: cvl_0_1 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
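
The device discovery just traced repeats at the top of every test in this run: gather_supported_nvmf_pci_devs matches known Intel (0x8086) and Mellanox (0x15b3) device IDs against the PCI bus, keeps the e810 functions ([[ e810 == e810 ]] above), and resolves each function to its kernel netdev through sysfs. On this host that resolution amounts to:

  ls /sys/bus/pci/devices/0000:84:00.0/net/    # -> cvl_0_0
  ls /sys/bus/pci/devices/0000:84:00.1/net/    # -> cvl_0_1

(The script does it with a glob, pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*), rather than ls.)
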
00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:12.084 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:12.084 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:21:12.084 00:21:12.084 --- 10.0.0.2 ping statistics --- 00:21:12.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.084 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:12.084 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:12.084 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:21:12.084 00:21:12.084 --- 10.0.0.1 ping statistics --- 00:21:12.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.084 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:12.084 23:24:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:12.343 23:24:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:21:12.343 23:24:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:12.343 23:24:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:12.343 23:24:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.343 23:24:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2399944 00:21:12.343 23:24:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:12.343 23:24:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:12.343 23:24:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2399944 00:21:12.343 23:24:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 2399944 ']' 00:21:12.343 23:24:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.343 23:24:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:12.343 23:24:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:12.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:12.343 23:24:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:12.343 23:24:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.343 [2024-07-15 23:24:27.451547] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:21:12.343 [2024-07-15 23:24:27.451614] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:12.343 EAL: No free 2048 kB hugepages reported on node 1 00:21:12.343 [2024-07-15 23:24:27.523111] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:12.343 [2024-07-15 23:24:27.651642] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:12.343 [2024-07-15 23:24:27.651707] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:12.343 [2024-07-15 23:24:27.651724] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:12.343 [2024-07-15 23:24:27.651746] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:12.343 [2024-07-15 23:24:27.651760] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:12.343 [2024-07-15 23:24:27.651831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:12.343 [2024-07-15 23:24:27.651862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:12.343 [2024-07-15 23:24:27.655775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:12.343 [2024-07-15 23:24:27.656754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:12.601 23:24:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:12.601 23:24:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:21:12.601 23:24:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:12.859 [2024-07-15 23:24:28.062538] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:12.859 23:24:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:12.859 23:24:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:12.859 23:24:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.859 23:24:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:13.117 Malloc1 00:21:13.117 23:24:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:13.375 23:24:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:13.942 23:24:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:13.942 [2024-07-15 23:24:29.190339] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:13.942 23:24:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:14.199 23:24:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:21:14.199 23:24:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:14.199 23:24:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:21:14.199 23:24:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:14.199 23:24:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:14.199 23:24:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:14.199 23:24:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:14.199 23:24:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:21:14.199 23:24:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:14.199 23:24:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:14.200 23:24:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:14.200 23:24:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:21:14.200 23:24:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:14.200 23:24:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:14.200 23:24:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:14.200 23:24:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:14.200 23:24:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:14.200 23:24:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:14.200 23:24:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:14.200 23:24:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:14.200 23:24:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:14.200 23:24:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:14.200 23:24:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:14.458 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:14.458 fio-3.35 00:21:14.458 Starting 1 thread 00:21:14.458 EAL: No free 2048 kB hugepages reported on node 1 00:21:16.986 00:21:16.986 test: (groupid=0, jobs=1): err= 0: pid=2400301: Mon Jul 15 23:24:32 2024 00:21:16.986 read: IOPS=8966, BW=35.0MiB/s (36.7MB/s)(70.3MiB/2007msec) 00:21:16.986 slat (usec): min=2, max=172, avg= 3.14, stdev= 2.48 00:21:16.986 clat (usec): min=2542, max=13172, avg=7826.68, stdev=597.55 00:21:16.986 lat (usec): min=2568, max=13175, avg=7829.81, stdev=597.45 00:21:16.986 clat percentiles (usec): 00:21:16.986 | 1.00th=[ 6521], 5.00th=[ 6915], 10.00th=[ 7111], 20.00th=[ 7373], 00:21:16.986 | 30.00th=[ 7570], 40.00th=[ 7701], 50.00th=[ 7832], 60.00th=[ 7963], 00:21:16.986 | 70.00th=[ 8094], 80.00th=[ 8291], 90.00th=[ 8586], 95.00th=[ 8717], 00:21:16.986 | 99.00th=[ 9241], 99.50th=[ 9372], 99.90th=[11731], 99.95th=[12387], 00:21:16.986 | 99.99th=[13173] 00:21:16.986 bw ( KiB/s): min=34704, 
max=36536, per=99.98%, avg=35858.00, stdev=804.50, samples=4 00:21:16.986 iops : min= 8676, max= 9134, avg=8964.50, stdev=201.13, samples=4 00:21:16.986 write: IOPS=8987, BW=35.1MiB/s (36.8MB/s)(70.5MiB/2007msec); 0 zone resets 00:21:16.986 slat (usec): min=2, max=130, avg= 3.39, stdev= 2.28 00:21:16.986 clat (usec): min=1446, max=12223, avg=6353.16, stdev=527.96 00:21:16.986 lat (usec): min=1454, max=12226, avg=6356.55, stdev=527.89 00:21:16.986 clat percentiles (usec): 00:21:16.986 | 1.00th=[ 5211], 5.00th=[ 5604], 10.00th=[ 5735], 20.00th=[ 5997], 00:21:16.986 | 30.00th=[ 6128], 40.00th=[ 6259], 50.00th=[ 6325], 60.00th=[ 6456], 00:21:16.986 | 70.00th=[ 6587], 80.00th=[ 6718], 90.00th=[ 6915], 95.00th=[ 7111], 00:21:16.986 | 99.00th=[ 7504], 99.50th=[ 7635], 99.90th=[10945], 99.95th=[11731], 00:21:16.986 | 99.99th=[12256] 00:21:16.986 bw ( KiB/s): min=35536, max=36200, per=100.00%, avg=35954.00, stdev=309.31, samples=4 00:21:16.986 iops : min= 8884, max= 9050, avg=8988.50, stdev=77.33, samples=4 00:21:16.986 lat (msec) : 2=0.02%, 4=0.11%, 10=99.72%, 20=0.15% 00:21:16.986 cpu : usr=66.70%, sys=30.46%, ctx=67, majf=0, minf=28 00:21:16.986 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:16.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.986 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:16.986 issued rwts: total=17996,18037,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.986 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:16.986 00:21:16.986 Run status group 0 (all jobs): 00:21:16.986 READ: bw=35.0MiB/s (36.7MB/s), 35.0MiB/s-35.0MiB/s (36.7MB/s-36.7MB/s), io=70.3MiB (73.7MB), run=2007-2007msec 00:21:16.986 WRITE: bw=35.1MiB/s (36.8MB/s), 35.1MiB/s-35.1MiB/s (36.8MB/s-36.8MB/s), io=70.5MiB (73.9MB), run=2007-2007msec 00:21:16.986 23:24:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:16.986 23:24:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:16.986 23:24:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:16.986 23:24:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:16.986 23:24:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:16.986 23:24:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:16.986 23:24:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:21:16.986 23:24:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:16.986 23:24:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:16.986 23:24:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:16.986 23:24:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:21:16.986 23:24:32 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:16.986 23:24:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:16.986 23:24:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:16.986 23:24:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:16.986 23:24:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:16.986 23:24:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:16.986 23:24:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:16.986 23:24:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:16.986 23:24:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:16.986 23:24:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:16.986 23:24:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:17.244 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:17.244 fio-3.35 00:21:17.244 Starting 1 thread 00:21:17.244 EAL: No free 2048 kB hugepages reported on node 1 00:21:19.771 00:21:19.771 test: (groupid=0, jobs=1): err= 0: pid=2400752: Mon Jul 15 23:24:34 2024 00:21:19.771 read: IOPS=8085, BW=126MiB/s (132MB/s)(254MiB/2008msec) 00:21:19.771 slat (usec): min=2, max=146, avg= 4.21, stdev= 2.50 00:21:19.771 clat (usec): min=2046, max=17848, avg=9266.69, stdev=2436.15 00:21:19.771 lat (usec): min=2050, max=17851, avg=9270.90, stdev=2436.20 00:21:19.771 clat percentiles (usec): 00:21:19.771 | 1.00th=[ 4883], 5.00th=[ 5735], 10.00th=[ 6325], 20.00th=[ 7177], 00:21:19.771 | 30.00th=[ 7767], 40.00th=[ 8291], 50.00th=[ 8848], 60.00th=[ 9634], 00:21:19.771 | 70.00th=[10290], 80.00th=[11338], 90.00th=[13042], 95.00th=[13566], 00:21:19.771 | 99.00th=[15270], 99.50th=[15926], 99.90th=[16909], 99.95th=[17433], 00:21:19.771 | 99.99th=[17695] 00:21:19.771 bw ( KiB/s): min=58528, max=74784, per=51.71%, avg=66888.00, stdev=8347.17, samples=4 00:21:19.771 iops : min= 3658, max= 4674, avg=4180.50, stdev=521.70, samples=4 00:21:19.771 write: IOPS=4899, BW=76.5MiB/s (80.3MB/s)(137MiB/1793msec); 0 zone resets 00:21:19.771 slat (usec): min=30, max=160, avg=38.20, stdev= 6.11 00:21:19.771 clat (usec): min=7363, max=18812, avg=11527.59, stdev=1868.68 00:21:19.771 lat (usec): min=7400, max=18849, avg=11565.78, stdev=1868.46 00:21:19.771 clat percentiles (usec): 00:21:19.771 | 1.00th=[ 7963], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[ 9896], 00:21:19.771 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11338], 60.00th=[11863], 00:21:19.771 | 70.00th=[12387], 80.00th=[13042], 90.00th=[13960], 95.00th=[14877], 00:21:19.771 | 99.00th=[16712], 99.50th=[17171], 99.90th=[17695], 99.95th=[17957], 00:21:19.771 | 99.99th=[18744] 00:21:19.771 bw ( KiB/s): min=60416, max=76896, per=88.78%, avg=69592.00, stdev=8524.00, samples=4 00:21:19.771 iops : min= 3776, max= 4806, avg=4349.50, stdev=532.75, samples=4 00:21:19.771 lat (msec) : 4=0.13%, 10=49.47%, 20=50.41% 00:21:19.771 cpu : usr=77.88%, sys=19.33%, ctx=51, majf=0, minf=63 
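The two fio jobs above run through the SPDK fio plugin rather than the kernel initiator: fio_plugin in autotest_common.sh ldd's build/fio/spdk_nvme for sanitizer libraries, builds an LD_PRELOAD string, and hands the NVMe-oF connection parameters to fio via --filename. A condensed sketch of that invocation, assuming the plugin and job-file locations from this workspace:

  PLUGIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
  # the job file already selects ioengine=spdk; --filename carries the transport/address/namespace tuple
  LD_PRELOAD="$PLUGIN" /usr/src/fio/fio \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
      --bs=4096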
00:21:19.771 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:21:19.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:19.771 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:19.771 issued rwts: total=16235,8784,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:19.771 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:19.771 00:21:19.771 Run status group 0 (all jobs): 00:21:19.771 READ: bw=126MiB/s (132MB/s), 126MiB/s-126MiB/s (132MB/s-132MB/s), io=254MiB (266MB), run=2008-2008msec 00:21:19.771 WRITE: bw=76.5MiB/s (80.3MB/s), 76.5MiB/s-76.5MiB/s (80.3MB/s-80.3MB/s), io=137MiB (144MB), run=1793-1793msec 00:21:19.771 23:24:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:19.771 23:24:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:21:19.771 23:24:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:19.771 23:24:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:19.771 23:24:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:21:19.771 23:24:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:19.771 23:24:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:21:19.771 23:24:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:19.771 23:24:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:21:19.771 23:24:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:19.771 23:24:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:19.771 rmmod nvme_tcp 00:21:19.771 rmmod nvme_fabrics 00:21:19.771 rmmod nvme_keyring 00:21:19.771 23:24:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:19.771 23:24:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:21:19.771 23:24:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:21:19.771 23:24:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2399944 ']' 00:21:19.771 23:24:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 2399944 00:21:19.771 23:24:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 2399944 ']' 00:21:19.771 23:24:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 2399944 00:21:19.772 23:24:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:21:19.772 23:24:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:19.772 23:24:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2399944 00:21:19.772 23:24:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:19.772 23:24:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:19.772 23:24:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2399944' 00:21:19.772 killing process with pid 2399944 00:21:19.772 23:24:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 2399944 00:21:19.772 23:24:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 2399944 00:21:20.338 23:24:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:20.338 23:24:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp 
== \t\c\p ]] 00:21:20.338 23:24:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:20.338 23:24:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:20.338 23:24:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:20.338 23:24:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:20.338 23:24:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:20.338 23:24:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.239 23:24:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:22.239 00:21:22.239 real 0m12.261s 00:21:22.239 user 0m36.490s 00:21:22.239 sys 0m3.920s 00:21:22.239 23:24:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:22.239 23:24:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.239 ************************************ 00:21:22.239 END TEST nvmf_fio_host 00:21:22.239 ************************************ 00:21:22.239 23:24:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:22.239 23:24:37 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:22.239 23:24:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:22.239 23:24:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:22.239 23:24:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:22.239 ************************************ 00:21:22.239 START TEST nvmf_failover 00:21:22.239 ************************************ 00:21:22.239 23:24:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:22.239 * Looking for test storage... 
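The nvmftestfini sequence above tears the fixture down in reverse order: the nvmf_tgt process is killed, the kernel nvme-tcp/nvme-fabrics/nvme-keyring modules are unloaded, and _remove_spdk_ns plus the address flush return the interfaces to their original state. A rough manual equivalent (the explicit netns delete is an assumption about what _remove_spdk_ns does; the other commands appear verbatim in the trace):

  kill "$nvmfpid" && wait "$nvmfpid"        # stop the target started for this test
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  ip netns del cvl_0_0_ns_spdk              # assumption: done inside _remove_spdk_ns
  ip -4 addr flush cvl_0_1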
00:21:22.239 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:22.239 23:24:37 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:22.239 23:24:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:21:22.239 23:24:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:22.239 23:24:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:22.239 23:24:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:22.239 23:24:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:22.239 23:24:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:22.239 23:24:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:22.239 23:24:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:22.239 23:24:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:22.239 23:24:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:22.239 23:24:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:22.239 23:24:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:22.239 23:24:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:22.239 23:24:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:22.239 23:24:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:22.239 23:24:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:22.239 23:24:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:22.239 23:24:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:22.239 23:24:37 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:22.239 23:24:37 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:22.239 23:24:37 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:22.240 23:24:37 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.240 23:24:37 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.240 23:24:37 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.240 23:24:37 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:21:22.240 23:24:37 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.240 23:24:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:21:22.240 23:24:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:22.240 23:24:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:22.240 23:24:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:22.240 23:24:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:22.240 23:24:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:22.240 23:24:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:22.240 23:24:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:22.240 23:24:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:22.240 23:24:37 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:22.240 23:24:37 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:22.240 23:24:37 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:22.240 23:24:37 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:22.240 23:24:37 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:21:22.240 23:24:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:22.240 23:24:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:22.240 23:24:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:22.240 23:24:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:21:22.240 23:24:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:22.240 23:24:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.240 23:24:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:22.240 23:24:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.240 23:24:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:22.240 23:24:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:22.240 23:24:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:21:22.240 23:24:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:24.764 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:24.764 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:24.764 Found net devices under 0000:84:00.0: cvl_0_0 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:24.764 Found net devices under 0000:84:00.1: cvl_0_1 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:24.764 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:21:24.765 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:24.765 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:24.765 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:24.765 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:24.765 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:24.765 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:24.765 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:24.765 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:24.765 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:24.765 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:24.765 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:24.765 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:24.765 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:24.765 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:24.765 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:24.765 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:24.765 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:24.765 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:24.765 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:24.765 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:24.765 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:24.765 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:24.765 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:24.765 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:24.765 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:21:24.765 00:21:24.765 --- 10.0.0.2 ping statistics --- 00:21:24.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.765 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:21:24.765 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:24.765 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:24.765 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:21:24.765 00:21:24.765 --- 10.0.0.1 ping statistics --- 00:21:24.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.765 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:21:24.765 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:24.765 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:21:24.765 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:24.765 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:24.765 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:24.765 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:24.765 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:24.765 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:24.765 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:24.765 23:24:39 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:24.765 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:24.765 23:24:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:24.765 23:24:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:24.765 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2402961 00:21:24.765 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:24.765 23:24:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2402961 00:21:24.765 23:24:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2402961 ']' 00:21:24.765 23:24:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:24.765 23:24:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:24.765 23:24:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:24.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:24.765 23:24:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:24.765 23:24:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:24.765 [2024-07-15 23:24:39.832102] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
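nvmfappstart here launches a fresh target for the failover test inside the same namespace, this time with core mask 0xE (reactors on cores 1-3, leaving core 0 free for bdevperf later), and the test then creates the TCP transport over RPC before building the subsystem. A short sketch of those two steps, with paths abbreviated relative to the spdk checkout and the wait for the RPC socket left as a comment:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # wait until the target listens on /var/tmp/spdk.sock, then:
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192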
00:21:24.765 [2024-07-15 23:24:39.832200] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:24.765 EAL: No free 2048 kB hugepages reported on node 1 00:21:24.765 [2024-07-15 23:24:39.899999] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:24.765 [2024-07-15 23:24:40.015092] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:24.765 [2024-07-15 23:24:40.015154] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:24.765 [2024-07-15 23:24:40.015184] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:24.765 [2024-07-15 23:24:40.015197] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:24.765 [2024-07-15 23:24:40.015208] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:24.765 [2024-07-15 23:24:40.016762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:24.765 [2024-07-15 23:24:40.016815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:24.765 [2024-07-15 23:24:40.016819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:25.021 23:24:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:25.021 23:24:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:25.021 23:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:25.021 23:24:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:25.021 23:24:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:25.021 23:24:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:25.021 23:24:40 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:25.277 [2024-07-15 23:24:40.382355] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:25.277 23:24:40 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:25.533 Malloc0 00:21:25.533 23:24:40 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:25.789 23:24:40 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:26.045 23:24:41 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:26.302 [2024-07-15 23:24:41.381878] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:26.302 23:24:41 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:26.559 [2024-07-15 
23:24:41.638565] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:26.559 23:24:41 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:26.817 [2024-07-15 23:24:41.891447] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:26.817 23:24:41 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2403251 00:21:26.817 23:24:41 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:26.817 23:24:41 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:26.817 23:24:41 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2403251 /var/tmp/bdevperf.sock 00:21:26.817 23:24:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2403251 ']' 00:21:26.817 23:24:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:26.817 23:24:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:26.817 23:24:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:26.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:26.817 23:24:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:26.817 23:24:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:27.075 23:24:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:27.075 23:24:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:27.075 23:24:42 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:27.639 NVMe0n1 00:21:27.639 23:24:42 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:27.897 00:21:27.897 23:24:43 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2403381 00:21:27.897 23:24:43 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:27.897 23:24:43 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:21:28.827 23:24:44 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:29.084 [2024-07-15 23:24:44.279303] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a49520 is same with the state(5) to be set 00:21:29.084 [2024-07-15 23:24:44.279367] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1a49520 is same with the state(5) to be set 00:21:29.084 [2024-07-15 23:24:44.279398] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a49520 is same with the state(5) to be set 00:21:29.084 [2024-07-15 23:24:44.279411] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a49520 is same with the state(5) to be set 00:21:29.084 [2024-07-15 23:24:44.279428] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a49520 is same with the state(5) to be set 00:21:29.084 [2024-07-15 23:24:44.279440] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a49520 is same with the state(5) to be set 00:21:29.084 [2024-07-15 23:24:44.279452] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a49520 is same with the state(5) to be set 00:21:29.084 [2024-07-15 23:24:44.279463] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a49520 is same with the state(5) to be set 00:21:29.084 [2024-07-15 23:24:44.279475] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a49520 is same with the state(5) to be set 00:21:29.084 23:24:44 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:21:32.360 23:24:47 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:32.618 00:21:32.618 23:24:47 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:32.875 [2024-07-15 23:24:48.084470] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4a450 is same with the state(5) to be set 00:21:32.875 23:24:48 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:21:36.237 23:24:51 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:36.237 [2024-07-15 23:24:51.363584] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:36.237 23:24:51 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:21:37.170 23:24:52 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:37.427 [2024-07-15 23:24:52.672569] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4b3d0 is same with the state(5) to be set 00:21:37.427 [2024-07-15 23:24:52.672638] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4b3d0 is same with the state(5) to be set 00:21:37.427 [2024-07-15 23:24:52.672652] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4b3d0 is same with the state(5) to be set 00:21:37.427 [2024-07-15 23:24:52.672664] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4b3d0 is same with the state(5) to be set 00:21:37.427 [2024-07-15 23:24:52.672685] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4b3d0 is same with the state(5) to be set 00:21:37.427 [2024-07-15 23:24:52.672698] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4b3d0 is same with the state(5) to be set 00:21:37.427 [2024-07-15 23:24:52.672709] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4b3d0 is same with the state(5) to be set 00:21:37.427 [2024-07-15 23:24:52.672721] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4b3d0 is same with the state(5) to be set 00:21:37.427 [2024-07-15 23:24:52.672732] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4b3d0 is same with the state(5) to be set 00:21:37.427 [2024-07-15 23:24:52.672769] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4b3d0 is same with the state(5) to be set 00:21:37.427 [2024-07-15 23:24:52.672783] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4b3d0 is same with the state(5) to be set 00:21:37.427 [2024-07-15 23:24:52.672796] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4b3d0 is same with the state(5) to be set 00:21:37.427 [2024-07-15 23:24:52.672808] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4b3d0 is same with the state(5) to be set 00:21:37.427 [2024-07-15 23:24:52.672819] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4b3d0 is same with the state(5) to be set 00:21:37.428 [2024-07-15 23:24:52.672831] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4b3d0 is same with the state(5) to be set 00:21:37.428 [2024-07-15 23:24:52.672842] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4b3d0 is same with the state(5) to be set 00:21:37.428 [2024-07-15 23:24:52.672854] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4b3d0 is same with the state(5) to be set 00:21:37.428 [2024-07-15 23:24:52.672868] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4b3d0 is same with the state(5) to be set 00:21:37.428 [2024-07-15 23:24:52.672880] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4b3d0 is same with the state(5) to be set 00:21:37.428 [2024-07-15 23:24:52.672892] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4b3d0 is same with the state(5) to be set 00:21:37.428 [2024-07-15 23:24:52.672903] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4b3d0 is same with the state(5) to be set 00:21:37.428 [2024-07-15 23:24:52.672915] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4b3d0 is same with the state(5) to be set 00:21:37.428 [2024-07-15 23:24:52.672927] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4b3d0 is same with the state(5) to be set 00:21:37.428 [2024-07-15 23:24:52.672940] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4b3d0 is same with the state(5) to be set 00:21:37.428 [2024-07-15 23:24:52.672952] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4b3d0 is same with the state(5) to be set 00:21:37.428 [2024-07-15 23:24:52.672966] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4b3d0 is same with the state(5) to be set 00:21:37.428 [2024-07-15 23:24:52.672977] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4b3d0 is same with the 
state(5) to be set 00:21:37.428 [2024-07-15 23:24:52.672989] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4b3d0 is same with the state(5) to be set 00:21:37.428 [2024-07-15 23:24:52.673000] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4b3d0 is same with the state(5) to be set 00:21:37.428 [2024-07-15 23:24:52.673011] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4b3d0 is same with the state(5) to be set 00:21:37.428 [2024-07-15 23:24:52.673025] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4b3d0 is same with the state(5) to be set 00:21:37.428 [2024-07-15 23:24:52.673057] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4b3d0 is same with the state(5) to be set 00:21:37.428 [2024-07-15 23:24:52.673071] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4b3d0 is same with the state(5) to be set 00:21:37.428 23:24:52 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 2403381 00:21:43.993 0 00:21:43.993 23:24:58 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 2403251 00:21:43.993 23:24:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2403251 ']' 00:21:43.993 23:24:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2403251 00:21:43.993 23:24:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:21:43.993 23:24:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:43.993 23:24:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2403251 00:21:43.993 23:24:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:43.993 23:24:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:43.993 23:24:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2403251' 00:21:43.993 killing process with pid 2403251 00:21:43.993 23:24:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2403251 00:21:43.993 23:24:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2403251 00:21:43.993 23:24:58 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:43.993 [2024-07-15 23:24:41.954018] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:21:43.993 [2024-07-15 23:24:41.954117] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2403251 ] 00:21:43.993 EAL: No free 2048 kB hugepages reported on node 1 00:21:43.993 [2024-07-15 23:24:42.014829] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.993 [2024-07-15 23:24:42.127212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:43.993 Running I/O for 15 seconds... 
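The failover scenario assembled above is: one subsystem (nqn.2016-06.io.spdk:cnode1) backed by Malloc0, listeners on ports 4420, 4421 and 4422, bdevperf attached to the subsystem through 4420 and 4421, and then the 4420 listener removed while verify I/O is running so the initiator path has to fail over. The RPC sequence, condensed from this run with paths abbreviated to the spdk checkout:

  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do                              # one listener per candidate path
      ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
  done
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  ./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # drop the active path to trigger failover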
00:21:43.993 [2024-07-15 23:24:44.281637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:77656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.993 [2024-07-15 23:24:44.281679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.993 [2024-07-15 23:24:44.281733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:77664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.993 [2024-07-15 23:24:44.281774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.993 [2024-07-15 23:24:44.281792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:77672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.993 [2024-07-15 23:24:44.281807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.993 [2024-07-15 23:24:44.281823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:77680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.993 [2024-07-15 23:24:44.281837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.993 [2024-07-15 23:24:44.281854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:77688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.993 [2024-07-15 23:24:44.281868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.993 [2024-07-15 23:24:44.281884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:77696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.993 [2024-07-15 23:24:44.281898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.993 [2024-07-15 23:24:44.281914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:77704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.993 [2024-07-15 23:24:44.281928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.993 [2024-07-15 23:24:44.281944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:77712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.993 [2024-07-15 23:24:44.281958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.993 [2024-07-15 23:24:44.281974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:77720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.993 [2024-07-15 23:24:44.281988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.993 [2024-07-15 23:24:44.282004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:77728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.993 [2024-07-15 23:24:44.282027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.993 [2024-07-15 23:24:44.282043] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:77736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.993 [2024-07-15 23:24:44.282057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.993 [2024-07-15 23:24:44.282097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:77744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.993 [2024-07-15 23:24:44.282112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.993 [2024-07-15 23:24:44.282127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:77752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.993 [2024-07-15 23:24:44.282141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.993 [2024-07-15 23:24:44.282158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:77760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.993 [2024-07-15 23:24:44.282172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.993 [2024-07-15 23:24:44.282187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:77768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.993 [2024-07-15 23:24:44.282201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.993 [2024-07-15 23:24:44.282216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:77776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.993 [2024-07-15 23:24:44.282229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.993 [2024-07-15 23:24:44.282244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:77784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.993 [2024-07-15 23:24:44.282257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.993 [2024-07-15 23:24:44.282272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:77792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.993 [2024-07-15 23:24:44.282286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.993 [2024-07-15 23:24:44.282301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:77800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.993 [2024-07-15 23:24:44.282314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.993 [2024-07-15 23:24:44.282329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:77808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.993 [2024-07-15 23:24:44.282343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.993 [2024-07-15 23:24:44.282358] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:77816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.993 [2024-07-15 23:24:44.282371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.993 [2024-07-15 23:24:44.282386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:77824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.993 [2024-07-15 23:24:44.282399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.993 [2024-07-15 23:24:44.282414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:77832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.993 [2024-07-15 23:24:44.282427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.993 [2024-07-15 23:24:44.282443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:77840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.993 [2024-07-15 23:24:44.282460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.993 [2024-07-15 23:24:44.282475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.993 [2024-07-15 23:24:44.282489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.993 [2024-07-15 23:24:44.282504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:77856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.993 [2024-07-15 23:24:44.282518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.993 [2024-07-15 23:24:44.282534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:77864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.993 [2024-07-15 23:24:44.282547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.993 [2024-07-15 23:24:44.282562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.993 [2024-07-15 23:24:44.282576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.993 [2024-07-15 23:24:44.282591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:77880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.993 [2024-07-15 23:24:44.282605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.993 [2024-07-15 23:24:44.282621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:77888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.993 [2024-07-15 23:24:44.282634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.993 [2024-07-15 23:24:44.282649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:77896 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.993 [2024-07-15 23:24:44.282663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.993 [2024-07-15 23:24:44.282678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:77904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.993 [2024-07-15 23:24:44.282691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.993 [2024-07-15 23:24:44.282706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:77912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.993 [2024-07-15 23:24:44.282735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.994 [2024-07-15 23:24:44.282762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:77920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.994 [2024-07-15 23:24:44.282776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.994 [2024-07-15 23:24:44.282792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:77928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.994 [2024-07-15 23:24:44.282806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.994 [2024-07-15 23:24:44.282822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:77936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.994 [2024-07-15 23:24:44.282835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.994 [2024-07-15 23:24:44.282854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:77944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.994 [2024-07-15 23:24:44.282869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.994 [2024-07-15 23:24:44.282885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:77952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.994 [2024-07-15 23:24:44.282898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.994 [2024-07-15 23:24:44.282914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:77960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.994 [2024-07-15 23:24:44.282928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.994 [2024-07-15 23:24:44.282945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.994 [2024-07-15 23:24:44.282959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.994 [2024-07-15 23:24:44.282974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:77976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.994 
[2024-07-15 23:24:44.282988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.994 [2024-07-15 23:24:44.283004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:77984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.994 [2024-07-15 23:24:44.283017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.994 [2024-07-15 23:24:44.283058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:77992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.994 [2024-07-15 23:24:44.283071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.994 [2024-07-15 23:24:44.283086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:78000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.994 [2024-07-15 23:24:44.283115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.994 [2024-07-15 23:24:44.283131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:78008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.994 [2024-07-15 23:24:44.283146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.994 [2024-07-15 23:24:44.283161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.994 [2024-07-15 23:24:44.283175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.994 [2024-07-15 23:24:44.283190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:78024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.994 [2024-07-15 23:24:44.283204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.994 [2024-07-15 23:24:44.283219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:78032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.994 [2024-07-15 23:24:44.283233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.994 [2024-07-15 23:24:44.283248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:78040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.994 [2024-07-15 23:24:44.283262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.994 [2024-07-15 23:24:44.283281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:78048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.994 [2024-07-15 23:24:44.283295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.994 [2024-07-15 23:24:44.283311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:78056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.994 [2024-07-15 23:24:44.283324] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.994 [2024-07-15 23:24:44.283340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:78064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.994 [2024-07-15 23:24:44.283353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.994 [2024-07-15 23:24:44.283369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:78072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.994 [2024-07-15 23:24:44.283383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.994 [2024-07-15 23:24:44.283398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:78080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.994 [2024-07-15 23:24:44.283412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.994 [2024-07-15 23:24:44.283427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.994 [2024-07-15 23:24:44.283441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.994 [2024-07-15 23:24:44.283456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:78096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.994 [2024-07-15 23:24:44.283470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.994 [2024-07-15 23:24:44.283485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:78104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.994 [2024-07-15 23:24:44.283500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.994 [2024-07-15 23:24:44.283516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:78112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.994 [2024-07-15 23:24:44.283530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.994 [2024-07-15 23:24:44.283545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:78120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.994 [2024-07-15 23:24:44.283559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.994 [2024-07-15 23:24:44.283574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:78128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.994 [2024-07-15 23:24:44.283588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.994 [2024-07-15 23:24:44.283603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:78136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.994 [2024-07-15 23:24:44.283617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.994 [2024-07-15 23:24:44.283632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:78144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.994 [2024-07-15 23:24:44.283650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.994 [2024-07-15 23:24:44.283666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:78152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.994 [2024-07-15 23:24:44.283680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.994 [2024-07-15 23:24:44.283696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:78160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.994 [2024-07-15 23:24:44.283710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.994 [2024-07-15 23:24:44.283735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:78168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.994 [2024-07-15 23:24:44.283755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.994 [2024-07-15 23:24:44.283771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:78176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.994 [2024-07-15 23:24:44.283786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.994 [2024-07-15 23:24:44.283801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:78184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.994 [2024-07-15 23:24:44.283815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.994 [2024-07-15 23:24:44.283830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:78192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.994 [2024-07-15 23:24:44.283844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.994 [2024-07-15 23:24:44.283859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:78200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.994 [2024-07-15 23:24:44.283873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.994 [2024-07-15 23:24:44.283889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:78208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.994 [2024-07-15 23:24:44.283902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.994 [2024-07-15 23:24:44.283918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:78216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.994 [2024-07-15 23:24:44.283932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:21:43.994 [2024-07-15 23:24:44.283947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:77512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.994 [2024-07-15 23:24:44.283961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.994 [2024-07-15 23:24:44.283976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:77520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.994 [2024-07-15 23:24:44.283991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.994 [2024-07-15 23:24:44.284006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.994 [2024-07-15 23:24:44.284027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.994 [2024-07-15 23:24:44.284046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:78232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.994 [2024-07-15 23:24:44.284060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.994 [2024-07-15 23:24:44.284076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:78240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.995 [2024-07-15 23:24:44.284090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.995 [2024-07-15 23:24:44.284105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:78248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.995 [2024-07-15 23:24:44.284119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.995 [2024-07-15 23:24:44.284134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:78256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.995 [2024-07-15 23:24:44.284148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.995 [2024-07-15 23:24:44.284163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:78264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.995 [2024-07-15 23:24:44.284177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.995 [2024-07-15 23:24:44.284192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:78272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.995 [2024-07-15 23:24:44.284206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.995 [2024-07-15 23:24:44.284221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:78280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.995 [2024-07-15 23:24:44.284235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.995 [2024-07-15 23:24:44.284250] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:78288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.995 [2024-07-15 23:24:44.284264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.995 [2024-07-15 23:24:44.284279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:78296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.995 [2024-07-15 23:24:44.284293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.995 [2024-07-15 23:24:44.284308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:78304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.995 [2024-07-15 23:24:44.284322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.995 [2024-07-15 23:24:44.284337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:78312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.995 [2024-07-15 23:24:44.284351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.995 [2024-07-15 23:24:44.284366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:78320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.995 [2024-07-15 23:24:44.284380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.995 [2024-07-15 23:24:44.284395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:78328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.995 [2024-07-15 23:24:44.284412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.995 [2024-07-15 23:24:44.284428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:78336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.995 [2024-07-15 23:24:44.284442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.995 [2024-07-15 23:24:44.284457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:78344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.995 [2024-07-15 23:24:44.284471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.995 [2024-07-15 23:24:44.284487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:78352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.995 [2024-07-15 23:24:44.284501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.995 [2024-07-15 23:24:44.284516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.995 [2024-07-15 23:24:44.284529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.995 [2024-07-15 23:24:44.284545] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:78368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.995 [2024-07-15 23:24:44.284558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.995 [2024-07-15 23:24:44.284574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:78376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.995 [2024-07-15 23:24:44.284588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.995 [2024-07-15 23:24:44.284603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:78384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.995 [2024-07-15 23:24:44.284617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.995 [2024-07-15 23:24:44.284632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:78392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.995 [2024-07-15 23:24:44.284646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.995 [2024-07-15 23:24:44.284661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:78400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.995 [2024-07-15 23:24:44.284674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.995 [2024-07-15 23:24:44.284689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:78408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.995 [2024-07-15 23:24:44.284703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.995 [2024-07-15 23:24:44.284750] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.995 [2024-07-15 23:24:44.284768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78416 len:8 PRP1 0x0 PRP2 0x0 00:21:43.995 [2024-07-15 23:24:44.284782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.995 [2024-07-15 23:24:44.284996] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.995 [2024-07-15 23:24:44.285015] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.995 [2024-07-15 23:24:44.285031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78424 len:8 PRP1 0x0 PRP2 0x0 00:21:43.995 [2024-07-15 23:24:44.285049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.995 [2024-07-15 23:24:44.285066] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.995 [2024-07-15 23:24:44.285078] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.995 [2024-07-15 23:24:44.285089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78432 len:8 PRP1 0x0 PRP2 0x0 00:21:43.995 [2024-07-15 23:24:44.285102] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.995 [2024-07-15 23:24:44.285115] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.995 [2024-07-15 23:24:44.285126] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.995 [2024-07-15 23:24:44.285137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78440 len:8 PRP1 0x0 PRP2 0x0 00:21:43.995 [2024-07-15 23:24:44.285150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.995 [2024-07-15 23:24:44.285164] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.995 [2024-07-15 23:24:44.285175] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.995 [2024-07-15 23:24:44.285187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78448 len:8 PRP1 0x0 PRP2 0x0 00:21:43.995 [2024-07-15 23:24:44.285200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.995 [2024-07-15 23:24:44.285213] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.995 [2024-07-15 23:24:44.285224] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.995 [2024-07-15 23:24:44.285235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78456 len:8 PRP1 0x0 PRP2 0x0 00:21:43.995 [2024-07-15 23:24:44.285248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.995 [2024-07-15 23:24:44.285261] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.995 [2024-07-15 23:24:44.285272] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.995 [2024-07-15 23:24:44.285283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78464 len:8 PRP1 0x0 PRP2 0x0 00:21:43.995 [2024-07-15 23:24:44.285296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.995 [2024-07-15 23:24:44.285309] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.995 [2024-07-15 23:24:44.285320] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.995 [2024-07-15 23:24:44.285331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78472 len:8 PRP1 0x0 PRP2 0x0 00:21:43.995 [2024-07-15 23:24:44.285343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.995 [2024-07-15 23:24:44.285356] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.995 [2024-07-15 23:24:44.285367] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.995 [2024-07-15 23:24:44.285378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78480 len:8 PRP1 0x0 PRP2 0x0 00:21:43.995 [2024-07-15 23:24:44.285390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.995 [2024-07-15 23:24:44.285403] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.995 [2024-07-15 23:24:44.285418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.995 [2024-07-15 23:24:44.285430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78488 len:8 PRP1 0x0 PRP2 0x0 00:21:43.995 [2024-07-15 23:24:44.285443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.995 [2024-07-15 23:24:44.285456] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.995 [2024-07-15 23:24:44.285467] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.995 [2024-07-15 23:24:44.285478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78496 len:8 PRP1 0x0 PRP2 0x0 00:21:43.995 [2024-07-15 23:24:44.285491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.995 [2024-07-15 23:24:44.285504] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.995 [2024-07-15 23:24:44.285515] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.995 [2024-07-15 23:24:44.285526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78504 len:8 PRP1 0x0 PRP2 0x0 00:21:43.995 [2024-07-15 23:24:44.285539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.995 [2024-07-15 23:24:44.285552] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.995 [2024-07-15 23:24:44.285563] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.995 [2024-07-15 23:24:44.285575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78512 len:8 PRP1 0x0 PRP2 0x0 00:21:43.996 [2024-07-15 23:24:44.285587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.996 [2024-07-15 23:24:44.285600] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.996 [2024-07-15 23:24:44.285611] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.996 [2024-07-15 23:24:44.285622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78520 len:8 PRP1 0x0 PRP2 0x0 00:21:43.996 [2024-07-15 23:24:44.285634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.996 [2024-07-15 23:24:44.285647] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.996 [2024-07-15 23:24:44.285658] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.996 [2024-07-15 23:24:44.285669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77528 len:8 PRP1 0x0 PRP2 0x0 00:21:43.996 [2024-07-15 23:24:44.285682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:43.996 [2024-07-15 23:24:44.285695] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.996 [2024-07-15 23:24:44.285706] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.996 [2024-07-15 23:24:44.285717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77536 len:8 PRP1 0x0 PRP2 0x0 00:21:43.996 [2024-07-15 23:24:44.285747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.996 [2024-07-15 23:24:44.285762] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.996 [2024-07-15 23:24:44.285773] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.996 [2024-07-15 23:24:44.285785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77544 len:8 PRP1 0x0 PRP2 0x0 00:21:43.996 [2024-07-15 23:24:44.285797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.996 [2024-07-15 23:24:44.285814] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.996 [2024-07-15 23:24:44.285825] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.996 [2024-07-15 23:24:44.285836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77552 len:8 PRP1 0x0 PRP2 0x0 00:21:43.996 [2024-07-15 23:24:44.285849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.996 [2024-07-15 23:24:44.285862] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.996 [2024-07-15 23:24:44.285874] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.996 [2024-07-15 23:24:44.285885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77560 len:8 PRP1 0x0 PRP2 0x0 00:21:43.996 [2024-07-15 23:24:44.285897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.996 [2024-07-15 23:24:44.285910] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.996 [2024-07-15 23:24:44.285921] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.996 [2024-07-15 23:24:44.285932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77568 len:8 PRP1 0x0 PRP2 0x0 00:21:43.996 [2024-07-15 23:24:44.285945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.996 [2024-07-15 23:24:44.285959] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.996 [2024-07-15 23:24:44.285970] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.996 [2024-07-15 23:24:44.285981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77576 len:8 PRP1 0x0 PRP2 0x0 00:21:43.996 [2024-07-15 23:24:44.285993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.996 [2024-07-15 23:24:44.286006] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.996 [2024-07-15 23:24:44.286025] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.996 [2024-07-15 23:24:44.286036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77584 len:8 PRP1 0x0 PRP2 0x0 00:21:43.996 [2024-07-15 23:24:44.286049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.996 [2024-07-15 23:24:44.286062] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.996 [2024-07-15 23:24:44.286073] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.996 [2024-07-15 23:24:44.286084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77592 len:8 PRP1 0x0 PRP2 0x0 00:21:43.996 [2024-07-15 23:24:44.286097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.996 [2024-07-15 23:24:44.286110] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.996 [2024-07-15 23:24:44.286121] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.996 [2024-07-15 23:24:44.286132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77600 len:8 PRP1 0x0 PRP2 0x0 00:21:43.996 [2024-07-15 23:24:44.286145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.996 [2024-07-15 23:24:44.286157] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.996 [2024-07-15 23:24:44.286168] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.996 [2024-07-15 23:24:44.286179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77608 len:8 PRP1 0x0 PRP2 0x0 00:21:43.996 [2024-07-15 23:24:44.286195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.996 [2024-07-15 23:24:44.286208] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.996 [2024-07-15 23:24:44.286219] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.996 [2024-07-15 23:24:44.286230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77616 len:8 PRP1 0x0 PRP2 0x0 00:21:43.996 [2024-07-15 23:24:44.286243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.996 [2024-07-15 23:24:44.286256] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.996 [2024-07-15 23:24:44.286266] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.996 [2024-07-15 23:24:44.286277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77624 len:8 PRP1 0x0 PRP2 0x0 00:21:43.996 [2024-07-15 23:24:44.286290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.996 [2024-07-15 23:24:44.286302] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:21:43.996 [2024-07-15 23:24:44.286313] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.996 [2024-07-15 23:24:44.286324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77632 len:8 PRP1 0x0 PRP2 0x0 00:21:43.996 [2024-07-15 23:24:44.286336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.996 [2024-07-15 23:24:44.286350] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.996 [2024-07-15 23:24:44.286361] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.996 [2024-07-15 23:24:44.286372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77640 len:8 PRP1 0x0 PRP2 0x0 00:21:43.996 [2024-07-15 23:24:44.286384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.996 [2024-07-15 23:24:44.286397] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.996 [2024-07-15 23:24:44.286408] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.996 [2024-07-15 23:24:44.286419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78528 len:8 PRP1 0x0 PRP2 0x0 00:21:43.996 [2024-07-15 23:24:44.286432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.996 [2024-07-15 23:24:44.286446] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.996 [2024-07-15 23:24:44.286456] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.996 [2024-07-15 23:24:44.286468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77648 len:8 PRP1 0x0 PRP2 0x0 00:21:43.996 [2024-07-15 23:24:44.286480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.996 [2024-07-15 23:24:44.286493] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.996 [2024-07-15 23:24:44.286504] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.996 [2024-07-15 23:24:44.286515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77656 len:8 PRP1 0x0 PRP2 0x0 00:21:43.996 [2024-07-15 23:24:44.286527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.996 [2024-07-15 23:24:44.286540] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.996 [2024-07-15 23:24:44.286551] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.996 [2024-07-15 23:24:44.286565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77664 len:8 PRP1 0x0 PRP2 0x0 00:21:43.996 [2024-07-15 23:24:44.286578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.996 [2024-07-15 23:24:44.286591] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.996 [2024-07-15 23:24:44.286602] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.996 [2024-07-15 23:24:44.286613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77672 len:8 PRP1 0x0 PRP2 0x0 00:21:43.996 [2024-07-15 23:24:44.286625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.996 [2024-07-15 23:24:44.286638] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.996 [2024-07-15 23:24:44.286649] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.996 [2024-07-15 23:24:44.286660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77680 len:8 PRP1 0x0 PRP2 0x0 00:21:43.996 [2024-07-15 23:24:44.286673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.996 [2024-07-15 23:24:44.286688] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.996 [2024-07-15 23:24:44.286699] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.996 [2024-07-15 23:24:44.286710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77688 len:8 PRP1 0x0 PRP2 0x0 00:21:43.996 [2024-07-15 23:24:44.286732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.996 [2024-07-15 23:24:44.286755] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.996 [2024-07-15 23:24:44.286768] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.996 [2024-07-15 23:24:44.286779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77696 len:8 PRP1 0x0 PRP2 0x0 00:21:43.996 [2024-07-15 23:24:44.286792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.996 [2024-07-15 23:24:44.286805] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.996 [2024-07-15 23:24:44.286816] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.996 [2024-07-15 23:24:44.286827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77704 len:8 PRP1 0x0 PRP2 0x0 00:21:43.996 [2024-07-15 23:24:44.286840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.997 [2024-07-15 23:24:44.286853] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.997 [2024-07-15 23:24:44.286864] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.997 [2024-07-15 23:24:44.286876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77712 len:8 PRP1 0x0 PRP2 0x0 00:21:43.997 [2024-07-15 23:24:44.286889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.997 [2024-07-15 23:24:44.286902] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.997 [2024-07-15 23:24:44.286913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:21:43.997 [2024-07-15 23:24:44.286924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77720 len:8 PRP1 0x0 PRP2 0x0 00:21:43.997 [2024-07-15 23:24:44.286937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.997 [2024-07-15 23:24:44.286950] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.997 [2024-07-15 23:24:44.286965] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.997 [2024-07-15 23:24:44.286976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77728 len:8 PRP1 0x0 PRP2 0x0 00:21:43.997 [2024-07-15 23:24:44.286989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.997 [2024-07-15 23:24:44.287002] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.997 [2024-07-15 23:24:44.287024] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.997 [2024-07-15 23:24:44.287035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77736 len:8 PRP1 0x0 PRP2 0x0 00:21:43.997 [2024-07-15 23:24:44.287048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.997 [2024-07-15 23:24:44.287061] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.997 [2024-07-15 23:24:44.287072] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.997 [2024-07-15 23:24:44.287084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77744 len:8 PRP1 0x0 PRP2 0x0 00:21:43.997 [2024-07-15 23:24:44.287097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.997 [2024-07-15 23:24:44.287110] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.997 [2024-07-15 23:24:44.287121] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.997 [2024-07-15 23:24:44.287132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77752 len:8 PRP1 0x0 PRP2 0x0 00:21:43.997 [2024-07-15 23:24:44.287145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.997 [2024-07-15 23:24:44.287168] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.997 [2024-07-15 23:24:44.287180] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.997 [2024-07-15 23:24:44.287192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77760 len:8 PRP1 0x0 PRP2 0x0 00:21:43.997 [2024-07-15 23:24:44.287205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.997 [2024-07-15 23:24:44.287218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.997 [2024-07-15 23:24:44.287230] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.997 [2024-07-15 
23:24:44.287242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77768 len:8 PRP1 0x0 PRP2 0x0 00:21:43.997 [2024-07-15 23:24:44.287255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.997 [2024-07-15 23:24:44.287268] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.997 [2024-07-15 23:24:44.287279] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.997 [2024-07-15 23:24:44.287290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77776 len:8 PRP1 0x0 PRP2 0x0 00:21:43.997 [2024-07-15 23:24:44.287304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.997 [2024-07-15 23:24:44.287317] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.997 [2024-07-15 23:24:44.287328] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.997 [2024-07-15 23:24:44.287339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77784 len:8 PRP1 0x0 PRP2 0x0 00:21:43.997 [2024-07-15 23:24:44.287352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.997 [2024-07-15 23:24:44.287371] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.997 [2024-07-15 23:24:44.287383] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.997 [2024-07-15 23:24:44.287394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77792 len:8 PRP1 0x0 PRP2 0x0 00:21:43.997 [2024-07-15 23:24:44.287407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.997 [2024-07-15 23:24:44.287421] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.997 [2024-07-15 23:24:44.287432] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.997 [2024-07-15 23:24:44.287443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77800 len:8 PRP1 0x0 PRP2 0x0 00:21:43.997 [2024-07-15 23:24:44.287456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.997 [2024-07-15 23:24:44.287469] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.997 [2024-07-15 23:24:44.287479] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.997 [2024-07-15 23:24:44.287491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77808 len:8 PRP1 0x0 PRP2 0x0 00:21:43.997 [2024-07-15 23:24:44.287503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.997 [2024-07-15 23:24:44.287516] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.997 [2024-07-15 23:24:44.287527] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.997 [2024-07-15 23:24:44.287538] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77816 len:8 PRP1 0x0 PRP2 0x0 00:21:43.997 [2024-07-15 23:24:44.287551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.997 [2024-07-15 23:24:44.287565] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.997 [2024-07-15 23:24:44.287576] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.997 [2024-07-15 23:24:44.287587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77824 len:8 PRP1 0x0 PRP2 0x0 00:21:43.997 [2024-07-15 23:24:44.287600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.997 [2024-07-15 23:24:44.287613] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.997 [2024-07-15 23:24:44.287624] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.997 [2024-07-15 23:24:44.287635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77832 len:8 PRP1 0x0 PRP2 0x0 00:21:43.997 [2024-07-15 23:24:44.287647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.997 [2024-07-15 23:24:44.287660] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.997 [2024-07-15 23:24:44.287671] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.997 [2024-07-15 23:24:44.287682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77840 len:8 PRP1 0x0 PRP2 0x0 00:21:43.997 [2024-07-15 23:24:44.287695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.997 [2024-07-15 23:24:44.287708] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.997 [2024-07-15 23:24:44.287730] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.997 [2024-07-15 23:24:44.287749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77848 len:8 PRP1 0x0 PRP2 0x0 00:21:43.997 [2024-07-15 23:24:44.287768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.997 [2024-07-15 23:24:44.287782] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.997 [2024-07-15 23:24:44.287794] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.997 [2024-07-15 23:24:44.287805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77856 len:8 PRP1 0x0 PRP2 0x0 00:21:43.997 [2024-07-15 23:24:44.287818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.997 [2024-07-15 23:24:44.287831] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.997 [2024-07-15 23:24:44.287842] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.997 [2024-07-15 23:24:44.287853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:77864 len:8 PRP1 0x0 PRP2 0x0 00:21:43.997 [2024-07-15 23:24:44.287866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.997 [2024-07-15 23:24:44.287879] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.997 [2024-07-15 23:24:44.287890] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.998 [2024-07-15 23:24:44.287901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77872 len:8 PRP1 0x0 PRP2 0x0 00:21:43.998 [2024-07-15 23:24:44.287914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.998 [2024-07-15 23:24:44.287927] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.998 [2024-07-15 23:24:44.287938] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.998 [2024-07-15 23:24:44.287949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77880 len:8 PRP1 0x0 PRP2 0x0 00:21:43.998 [2024-07-15 23:24:44.287962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.998 [2024-07-15 23:24:44.287975] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.998 [2024-07-15 23:24:44.287986] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.998 [2024-07-15 23:24:44.287998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77888 len:8 PRP1 0x0 PRP2 0x0 00:21:43.998 [2024-07-15 23:24:44.288010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.998 [2024-07-15 23:24:44.288023] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.998 [2024-07-15 23:24:44.288034] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.998 [2024-07-15 23:24:44.288045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77896 len:8 PRP1 0x0 PRP2 0x0 00:21:43.998 [2024-07-15 23:24:44.288057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.998 [2024-07-15 23:24:44.288070] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.998 [2024-07-15 23:24:44.288081] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.998 [2024-07-15 23:24:44.288092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77904 len:8 PRP1 0x0 PRP2 0x0 00:21:43.998 [2024-07-15 23:24:44.288104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.998 [2024-07-15 23:24:44.288117] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.998 [2024-07-15 23:24:44.288128] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.998 [2024-07-15 23:24:44.288142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77912 len:8 PRP1 0x0 PRP2 0x0 
00:21:43.998 [2024-07-15 23:24:44.288155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.998 [2024-07-15 23:24:44.288168] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.998 [2024-07-15 23:24:44.288179] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.998 [2024-07-15 23:24:44.288190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77920 len:8 PRP1 0x0 PRP2 0x0 00:21:43.998 [2024-07-15 23:24:44.288203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.998 [2024-07-15 23:24:44.288216] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.998 [2024-07-15 23:24:44.288227] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.998 [2024-07-15 23:24:44.288239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77928 len:8 PRP1 0x0 PRP2 0x0 00:21:43.998 [2024-07-15 23:24:44.288252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.998 [2024-07-15 23:24:44.288265] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.998 [2024-07-15 23:24:44.288276] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.998 [2024-07-15 23:24:44.288287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77936 len:8 PRP1 0x0 PRP2 0x0 00:21:43.998 [2024-07-15 23:24:44.288300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.998 [2024-07-15 23:24:44.288313] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.998 [2024-07-15 23:24:44.288323] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.998 [2024-07-15 23:24:44.288335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77944 len:8 PRP1 0x0 PRP2 0x0 00:21:43.998 [2024-07-15 23:24:44.288347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.998 [2024-07-15 23:24:44.288361] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.998 [2024-07-15 23:24:44.288371] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.998 [2024-07-15 23:24:44.288383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77952 len:8 PRP1 0x0 PRP2 0x0 00:21:43.998 [2024-07-15 23:24:44.288396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.998 [2024-07-15 23:24:44.288409] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.998 [2024-07-15 23:24:44.288420] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.998 [2024-07-15 23:24:44.288431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77960 len:8 PRP1 0x0 PRP2 0x0 00:21:43.998 [2024-07-15 23:24:44.288444] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.998 [2024-07-15 23:24:44.288457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.998 [2024-07-15 23:24:44.288468] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.998 [2024-07-15 23:24:44.288479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77968 len:8 PRP1 0x0 PRP2 0x0 00:21:43.998 [2024-07-15 23:24:44.288492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.998 [2024-07-15 23:24:44.288508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.998 [2024-07-15 23:24:44.288519] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.998 [2024-07-15 23:24:44.288530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77976 len:8 PRP1 0x0 PRP2 0x0 00:21:43.998 [2024-07-15 23:24:44.288543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.998 [2024-07-15 23:24:44.288556] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.998 [2024-07-15 23:24:44.288567] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.998 [2024-07-15 23:24:44.288579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77984 len:8 PRP1 0x0 PRP2 0x0 00:21:43.998 [2024-07-15 23:24:44.288592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.998 [2024-07-15 23:24:44.288605] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.998 [2024-07-15 23:24:44.288616] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.998 [2024-07-15 23:24:44.288627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77992 len:8 PRP1 0x0 PRP2 0x0 00:21:43.998 [2024-07-15 23:24:44.288640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.998 [2024-07-15 23:24:44.288653] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.998 [2024-07-15 23:24:44.288664] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.998 [2024-07-15 23:24:44.288675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78000 len:8 PRP1 0x0 PRP2 0x0 00:21:43.998 [2024-07-15 23:24:44.288688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.998 [2024-07-15 23:24:44.288701] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.998 [2024-07-15 23:24:44.288711] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.998 [2024-07-15 23:24:44.288722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78008 len:8 PRP1 0x0 PRP2 0x0 00:21:43.998 [2024-07-15 23:24:44.288735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.998 [2024-07-15 23:24:44.288754] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.998 [2024-07-15 23:24:44.288766] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.998 [2024-07-15 23:24:44.288777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78016 len:8 PRP1 0x0 PRP2 0x0 00:21:43.998 [2024-07-15 23:24:44.288790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.998 [2024-07-15 23:24:44.295024] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.998 [2024-07-15 23:24:44.295053] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.998 [2024-07-15 23:24:44.295067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78024 len:8 PRP1 0x0 PRP2 0x0 00:21:43.998 [2024-07-15 23:24:44.295081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.998 [2024-07-15 23:24:44.295094] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.998 [2024-07-15 23:24:44.295105] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.998 [2024-07-15 23:24:44.295117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78032 len:8 PRP1 0x0 PRP2 0x0 00:21:43.998 [2024-07-15 23:24:44.295135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.998 [2024-07-15 23:24:44.295148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.998 [2024-07-15 23:24:44.295159] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.998 [2024-07-15 23:24:44.295170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78040 len:8 PRP1 0x0 PRP2 0x0 00:21:43.998 [2024-07-15 23:24:44.295183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.998 [2024-07-15 23:24:44.295196] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.998 [2024-07-15 23:24:44.295207] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.998 [2024-07-15 23:24:44.295218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78048 len:8 PRP1 0x0 PRP2 0x0 00:21:43.998 [2024-07-15 23:24:44.295230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.998 [2024-07-15 23:24:44.295243] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.998 [2024-07-15 23:24:44.295253] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.998 [2024-07-15 23:24:44.295264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78056 len:8 PRP1 0x0 PRP2 0x0 00:21:43.998 [2024-07-15 23:24:44.295277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:43.998 [2024-07-15 23:24:44.295289] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.998 [2024-07-15 23:24:44.295300] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.998 [2024-07-15 23:24:44.295311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78064 len:8 PRP1 0x0 PRP2 0x0 00:21:43.998 [2024-07-15 23:24:44.295323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.998 [2024-07-15 23:24:44.295336] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.998 [2024-07-15 23:24:44.295346] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.998 [2024-07-15 23:24:44.295357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78072 len:8 PRP1 0x0 PRP2 0x0 00:21:43.999 [2024-07-15 23:24:44.295370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.999 [2024-07-15 23:24:44.295383] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.999 [2024-07-15 23:24:44.295394] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.999 [2024-07-15 23:24:44.295405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78080 len:8 PRP1 0x0 PRP2 0x0 00:21:43.999 [2024-07-15 23:24:44.295417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.999 [2024-07-15 23:24:44.295430] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.999 [2024-07-15 23:24:44.295441] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.999 [2024-07-15 23:24:44.295452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78088 len:8 PRP1 0x0 PRP2 0x0 00:21:43.999 [2024-07-15 23:24:44.295464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.999 [2024-07-15 23:24:44.295477] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.999 [2024-07-15 23:24:44.295487] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.999 [2024-07-15 23:24:44.295502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78096 len:8 PRP1 0x0 PRP2 0x0 00:21:43.999 [2024-07-15 23:24:44.295515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.999 [2024-07-15 23:24:44.295528] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.999 [2024-07-15 23:24:44.295539] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.999 [2024-07-15 23:24:44.295550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78104 len:8 PRP1 0x0 PRP2 0x0 00:21:43.999 [2024-07-15 23:24:44.295562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.999 [2024-07-15 23:24:44.295575] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.999 [2024-07-15 23:24:44.295586] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.999 [2024-07-15 23:24:44.295597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78112 len:8 PRP1 0x0 PRP2 0x0 00:21:43.999 [2024-07-15 23:24:44.295609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.999 [2024-07-15 23:24:44.295622] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.999 [2024-07-15 23:24:44.295633] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.999 [2024-07-15 23:24:44.295644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78120 len:8 PRP1 0x0 PRP2 0x0 00:21:43.999 [2024-07-15 23:24:44.295656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.999 [2024-07-15 23:24:44.295669] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.999 [2024-07-15 23:24:44.295679] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.999 [2024-07-15 23:24:44.295690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78128 len:8 PRP1 0x0 PRP2 0x0 00:21:43.999 [2024-07-15 23:24:44.295703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.999 [2024-07-15 23:24:44.295715] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.999 [2024-07-15 23:24:44.295726] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.999 [2024-07-15 23:24:44.295745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78136 len:8 PRP1 0x0 PRP2 0x0 00:21:43.999 [2024-07-15 23:24:44.295760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.999 [2024-07-15 23:24:44.295773] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.999 [2024-07-15 23:24:44.295784] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.999 [2024-07-15 23:24:44.295795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78144 len:8 PRP1 0x0 PRP2 0x0 00:21:43.999 [2024-07-15 23:24:44.295807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.999 [2024-07-15 23:24:44.295819] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.999 [2024-07-15 23:24:44.295830] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.999 [2024-07-15 23:24:44.295841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78152 len:8 PRP1 0x0 PRP2 0x0 00:21:43.999 [2024-07-15 23:24:44.295853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.999 [2024-07-15 23:24:44.295866] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:21:43.999 [2024-07-15 23:24:44.295890] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.999 [2024-07-15 23:24:44.295902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78160 len:8 PRP1 0x0 PRP2 0x0 00:21:43.999 [2024-07-15 23:24:44.295914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.999 [2024-07-15 23:24:44.295927] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.999 [2024-07-15 23:24:44.295937] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.999 [2024-07-15 23:24:44.295948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78168 len:8 PRP1 0x0 PRP2 0x0 00:21:43.999 [2024-07-15 23:24:44.295961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.999 [2024-07-15 23:24:44.295974] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.999 [2024-07-15 23:24:44.295984] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.999 [2024-07-15 23:24:44.295995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78176 len:8 PRP1 0x0 PRP2 0x0 00:21:43.999 [2024-07-15 23:24:44.296008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.999 [2024-07-15 23:24:44.296020] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.999 [2024-07-15 23:24:44.296031] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.999 [2024-07-15 23:24:44.296042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78184 len:8 PRP1 0x0 PRP2 0x0 00:21:43.999 [2024-07-15 23:24:44.296054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.999 [2024-07-15 23:24:44.296067] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.999 [2024-07-15 23:24:44.296077] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.999 [2024-07-15 23:24:44.296088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78192 len:8 PRP1 0x0 PRP2 0x0 00:21:43.999 [2024-07-15 23:24:44.296101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.999 [2024-07-15 23:24:44.296114] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.999 [2024-07-15 23:24:44.296125] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.999 [2024-07-15 23:24:44.296136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78200 len:8 PRP1 0x0 PRP2 0x0 00:21:43.999 [2024-07-15 23:24:44.296149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.999 [2024-07-15 23:24:44.296176] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.999 [2024-07-15 
23:24:44.296187] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.999 [2024-07-15 23:24:44.296197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78208 len:8 PRP1 0x0 PRP2 0x0 00:21:43.999 [2024-07-15 23:24:44.296210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.999 [2024-07-15 23:24:44.296222] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.999 [2024-07-15 23:24:44.296232] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.999 [2024-07-15 23:24:44.296243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78216 len:8 PRP1 0x0 PRP2 0x0 00:21:43.999 [2024-07-15 23:24:44.296255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.999 [2024-07-15 23:24:44.296271] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.999 [2024-07-15 23:24:44.296282] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.999 [2024-07-15 23:24:44.296293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77512 len:8 PRP1 0x0 PRP2 0x0 00:21:43.999 [2024-07-15 23:24:44.296305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.999 [2024-07-15 23:24:44.296317] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.999 [2024-07-15 23:24:44.296328] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.999 [2024-07-15 23:24:44.296338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77520 len:8 PRP1 0x0 PRP2 0x0 00:21:43.999 [2024-07-15 23:24:44.296350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.999 [2024-07-15 23:24:44.296363] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.999 [2024-07-15 23:24:44.296373] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.999 [2024-07-15 23:24:44.296384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78224 len:8 PRP1 0x0 PRP2 0x0 00:21:43.999 [2024-07-15 23:24:44.296396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.999 [2024-07-15 23:24:44.296408] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.999 [2024-07-15 23:24:44.296419] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.999 [2024-07-15 23:24:44.296430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78232 len:8 PRP1 0x0 PRP2 0x0 00:21:43.999 [2024-07-15 23:24:44.296442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.999 [2024-07-15 23:24:44.296454] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.999 [2024-07-15 23:24:44.296465] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.999 [2024-07-15 23:24:44.296475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78240 len:8 PRP1 0x0 PRP2 0x0 00:21:43.999 [2024-07-15 23:24:44.296488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.999 [2024-07-15 23:24:44.296500] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.999 [2024-07-15 23:24:44.296511] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.999 [2024-07-15 23:24:44.296521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78248 len:8 PRP1 0x0 PRP2 0x0 00:21:43.999 [2024-07-15 23:24:44.296534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.999 [2024-07-15 23:24:44.296546] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.999 [2024-07-15 23:24:44.296557] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.999 [2024-07-15 23:24:44.296568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78256 len:8 PRP1 0x0 PRP2 0x0 00:21:43.999 [2024-07-15 23:24:44.296580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.000 [2024-07-15 23:24:44.296593] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.000 [2024-07-15 23:24:44.296604] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.000 [2024-07-15 23:24:44.296614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78264 len:8 PRP1 0x0 PRP2 0x0 00:21:44.000 [2024-07-15 23:24:44.296630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.000 [2024-07-15 23:24:44.296643] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.000 [2024-07-15 23:24:44.296654] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.000 [2024-07-15 23:24:44.296665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78272 len:8 PRP1 0x0 PRP2 0x0 00:21:44.000 [2024-07-15 23:24:44.296677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.000 [2024-07-15 23:24:44.296690] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.000 [2024-07-15 23:24:44.296700] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.000 [2024-07-15 23:24:44.296711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78280 len:8 PRP1 0x0 PRP2 0x0 00:21:44.000 [2024-07-15 23:24:44.296745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.000 [2024-07-15 23:24:44.296760] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.000 [2024-07-15 23:24:44.296771] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:21:44.000 [2024-07-15 23:24:44.296782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78288 len:8 PRP1 0x0 PRP2 0x0 00:21:44.000 [2024-07-15 23:24:44.296795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.000 [2024-07-15 23:24:44.296807] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.000 [2024-07-15 23:24:44.296818] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.000 [2024-07-15 23:24:44.296830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78296 len:8 PRP1 0x0 PRP2 0x0 00:21:44.000 [2024-07-15 23:24:44.296842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.000 [2024-07-15 23:24:44.296855] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.000 [2024-07-15 23:24:44.296866] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.000 [2024-07-15 23:24:44.296877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78304 len:8 PRP1 0x0 PRP2 0x0 00:21:44.000 [2024-07-15 23:24:44.296890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.000 [2024-07-15 23:24:44.296903] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.000 [2024-07-15 23:24:44.296913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.000 [2024-07-15 23:24:44.296924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78312 len:8 PRP1 0x0 PRP2 0x0 00:21:44.000 [2024-07-15 23:24:44.296937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.000 [2024-07-15 23:24:44.296950] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.000 [2024-07-15 23:24:44.296961] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.000 [2024-07-15 23:24:44.296973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78320 len:8 PRP1 0x0 PRP2 0x0 00:21:44.000 [2024-07-15 23:24:44.296985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.000 [2024-07-15 23:24:44.296998] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.000 [2024-07-15 23:24:44.297009] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.000 [2024-07-15 23:24:44.297024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78328 len:8 PRP1 0x0 PRP2 0x0 00:21:44.000 [2024-07-15 23:24:44.297038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.000 [2024-07-15 23:24:44.297068] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.000 [2024-07-15 23:24:44.297078] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.000 [2024-07-15 
23:24:44.297089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78336 len:8 PRP1 0x0 PRP2 0x0 00:21:44.000 [2024-07-15 23:24:44.297101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.000 [2024-07-15 23:24:44.297114] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.000 [2024-07-15 23:24:44.297125] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.000 [2024-07-15 23:24:44.297136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78344 len:8 PRP1 0x0 PRP2 0x0 00:21:44.000 [2024-07-15 23:24:44.297148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.000 [2024-07-15 23:24:44.297160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.000 [2024-07-15 23:24:44.297171] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.000 [2024-07-15 23:24:44.297181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78352 len:8 PRP1 0x0 PRP2 0x0 00:21:44.000 [2024-07-15 23:24:44.297193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.000 [2024-07-15 23:24:44.297206] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.000 [2024-07-15 23:24:44.297216] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.000 [2024-07-15 23:24:44.297226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78360 len:8 PRP1 0x0 PRP2 0x0 00:21:44.000 [2024-07-15 23:24:44.297238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.000 [2024-07-15 23:24:44.297251] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.000 [2024-07-15 23:24:44.297261] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.000 [2024-07-15 23:24:44.297273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78368 len:8 PRP1 0x0 PRP2 0x0 00:21:44.000 [2024-07-15 23:24:44.297285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.000 [2024-07-15 23:24:44.297297] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.000 [2024-07-15 23:24:44.297308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.000 [2024-07-15 23:24:44.297318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78376 len:8 PRP1 0x0 PRP2 0x0 00:21:44.000 [2024-07-15 23:24:44.297331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.000 [2024-07-15 23:24:44.297344] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.000 [2024-07-15 23:24:44.297355] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.000 [2024-07-15 23:24:44.297365] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78384 len:8 PRP1 0x0 PRP2 0x0 00:21:44.000 [2024-07-15 23:24:44.297378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.000 [2024-07-15 23:24:44.297394] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.000 [2024-07-15 23:24:44.297406] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.000 [2024-07-15 23:24:44.297417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78392 len:8 PRP1 0x0 PRP2 0x0 00:21:44.000 [2024-07-15 23:24:44.297429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.000 [2024-07-15 23:24:44.297442] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.000 [2024-07-15 23:24:44.297453] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.000 [2024-07-15 23:24:44.297463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78400 len:8 PRP1 0x0 PRP2 0x0 00:21:44.000 [2024-07-15 23:24:44.297476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.000 [2024-07-15 23:24:44.297488] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.000 [2024-07-15 23:24:44.297499] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.000 [2024-07-15 23:24:44.297510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78408 len:8 PRP1 0x0 PRP2 0x0 00:21:44.000 [2024-07-15 23:24:44.297522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.000 [2024-07-15 23:24:44.297535] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.000 [2024-07-15 23:24:44.297546] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.000 [2024-07-15 23:24:44.297556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78416 len:8 PRP1 0x0 PRP2 0x0 00:21:44.000 [2024-07-15 23:24:44.297569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.000 [2024-07-15 23:24:44.297628] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ea0af0 was disconnected and freed. reset controller. 
00:21:44.000 [2024-07-15 23:24:44.297645] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:44.000 [2024-07-15 23:24:44.297683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:44.000 [2024-07-15 23:24:44.297711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.000 [2024-07-15 23:24:44.297750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:44.000 [2024-07-15 23:24:44.297770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.000 [2024-07-15 23:24:44.297786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:44.000 [2024-07-15 23:24:44.297799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.000 [2024-07-15 23:24:44.297813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:44.000 [2024-07-15 23:24:44.297826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.000 [2024-07-15 23:24:44.297840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:44.000 [2024-07-15 23:24:44.297888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e7a8d0 (9): Bad file descriptor 00:21:44.000 [2024-07-15 23:24:44.301174] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:44.000 [2024-07-15 23:24:44.332940] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:44.000 [2024-07-15 23:24:48.085859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:78336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.000 [2024-07-15 23:24:48.085906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.000 [2024-07-15 23:24:48.085935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.000 [2024-07-15 23:24:48.085952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.001 [2024-07-15 23:24:48.085980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:78352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.001 [2024-07-15 23:24:48.085996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.001 [2024-07-15 23:24:48.086013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:78360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.001 [2024-07-15 23:24:48.086052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.001 [2024-07-15 23:24:48.086069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:78368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.001 [2024-07-15 23:24:48.086084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.001 [2024-07-15 23:24:48.086126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:78376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.001 [2024-07-15 23:24:48.086142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.001 [2024-07-15 23:24:48.086167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.001 [2024-07-15 23:24:48.086181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.001 [2024-07-15 23:24:48.086196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:78392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.001 [2024-07-15 23:24:48.086209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.001 [2024-07-15 23:24:48.086224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:78400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.001 [2024-07-15 23:24:48.086237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.001 [2024-07-15 23:24:48.086252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:78408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.001 [2024-07-15 23:24:48.086266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.001 [2024-07-15 23:24:48.086281] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:78416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.001 [2024-07-15 23:24:48.086294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.001 [2024-07-15 23:24:48.086310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:78608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.001 [2024-07-15 23:24:48.086324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.001 [2024-07-15 23:24:48.086339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:78616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.001 [2024-07-15 23:24:48.086362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.001 [2024-07-15 23:24:48.086378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:78424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.001 [2024-07-15 23:24:48.086392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.001 [2024-07-15 23:24:48.086407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:78432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.001 [2024-07-15 23:24:48.086421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.001 [2024-07-15 23:24:48.086436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:78440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.001 [2024-07-15 23:24:48.086450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.001 [2024-07-15 23:24:48.086465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:78448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.001 [2024-07-15 23:24:48.086478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.001 [2024-07-15 23:24:48.086493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:78456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.001 [2024-07-15 23:24:48.086506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.001 [2024-07-15 23:24:48.086521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:78464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.001 [2024-07-15 23:24:48.086534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.001 [2024-07-15 23:24:48.086549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.001 [2024-07-15 23:24:48.086563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.001 [2024-07-15 23:24:48.086578] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:78480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.001 [2024-07-15 23:24:48.086592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.001 [2024-07-15 23:24:48.086607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.001 [2024-07-15 23:24:48.086621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.001 [2024-07-15 23:24:48.086637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:78496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.001 [2024-07-15 23:24:48.086651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.001 [2024-07-15 23:24:48.086666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:78504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.001 [2024-07-15 23:24:48.086680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.001 [2024-07-15 23:24:48.086695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:78512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.001 [2024-07-15 23:24:48.086708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.001 [2024-07-15 23:24:48.086748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:78520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.001 [2024-07-15 23:24:48.086765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.001 [2024-07-15 23:24:48.086781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:78528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.001 [2024-07-15 23:24:48.086796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.001 [2024-07-15 23:24:48.086812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.001 [2024-07-15 23:24:48.086826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.001 [2024-07-15 23:24:48.086842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:78544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.001 [2024-07-15 23:24:48.086855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.001 [2024-07-15 23:24:48.086870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:78552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.001 [2024-07-15 23:24:48.086883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.001 [2024-07-15 23:24:48.086898] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:90 nsid:1 lba:78560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.001 [2024-07-15 23:24:48.086912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.001 [2024-07-15 23:24:48.086927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:78568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.001 [2024-07-15 23:24:48.086940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.001 [2024-07-15 23:24:48.086956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:78576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.001 [2024-07-15 23:24:48.086969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.001 [2024-07-15 23:24:48.086985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:78584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.001 [2024-07-15 23:24:48.086999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.001 [2024-07-15 23:24:48.087014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.001 [2024-07-15 23:24:48.087043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.001 [2024-07-15 23:24:48.087058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:78600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.001 [2024-07-15 23:24:48.087071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.001 [2024-07-15 23:24:48.087086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:78624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.001 [2024-07-15 23:24:48.087099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.001 [2024-07-15 23:24:48.087114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:78632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.001 [2024-07-15 23:24:48.087134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.001 [2024-07-15 23:24:48.087149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:78640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.001 [2024-07-15 23:24:48.087163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.001 [2024-07-15 23:24:48.087177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:78648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.002 [2024-07-15 23:24:48.087190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.002 [2024-07-15 23:24:48.087205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:78656 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.002 [2024-07-15 23:24:48.087218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.002 [2024-07-15 23:24:48.087233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:78664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.002 [2024-07-15 23:24:48.087246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.002 [2024-07-15 23:24:48.087260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:78672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.002 [2024-07-15 23:24:48.087274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.002 [2024-07-15 23:24:48.087305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:78680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.002 [2024-07-15 23:24:48.087320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.002 [2024-07-15 23:24:48.087336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:78688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.002 [2024-07-15 23:24:48.087350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.002 [2024-07-15 23:24:48.087366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:78696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.002 [2024-07-15 23:24:48.087380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.002 [2024-07-15 23:24:48.087395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:78704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.002 [2024-07-15 23:24:48.087409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.002 [2024-07-15 23:24:48.087424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:78712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.002 [2024-07-15 23:24:48.087438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.002 [2024-07-15 23:24:48.087453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:78720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.002 [2024-07-15 23:24:48.087466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.002 [2024-07-15 23:24:48.087482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:78728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.002 [2024-07-15 23:24:48.087496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.002 [2024-07-15 23:24:48.087515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:78736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:44.002 [2024-07-15 23:24:48.087529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.002 [2024-07-15 23:24:48.087544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.002 [2024-07-15 23:24:48.087558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.002 [2024-07-15 23:24:48.087573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.002 [2024-07-15 23:24:48.087587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.002 [2024-07-15 23:24:48.087601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:78760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.002 [2024-07-15 23:24:48.087615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.002 [2024-07-15 23:24:48.087630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:78768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.002 [2024-07-15 23:24:48.087644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.002 [2024-07-15 23:24:48.087659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:78776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.002 [2024-07-15 23:24:48.087673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.002 [2024-07-15 23:24:48.087688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.002 [2024-07-15 23:24:48.087701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.002 [2024-07-15 23:24:48.087717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:78792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.002 [2024-07-15 23:24:48.087753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.002 [2024-07-15 23:24:48.087770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.002 [2024-07-15 23:24:48.087784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.002 [2024-07-15 23:24:48.087800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:78808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.002 [2024-07-15 23:24:48.087815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.002 [2024-07-15 23:24:48.087831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:78816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.002 [2024-07-15 23:24:48.087845] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.002 [2024-07-15 23:24:48.087861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:78824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.002 [2024-07-15 23:24:48.087875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.002 [2024-07-15 23:24:48.087891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:78832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.002 [2024-07-15 23:24:48.087904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.002 [2024-07-15 23:24:48.087924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:78840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.002 [2024-07-15 23:24:48.087938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.002 [2024-07-15 23:24:48.087954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.002 [2024-07-15 23:24:48.087968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.002 [2024-07-15 23:24:48.087985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:78856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.002 [2024-07-15 23:24:48.087999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.002 [2024-07-15 23:24:48.088014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:78864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.002 [2024-07-15 23:24:48.088043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.002 [2024-07-15 23:24:48.088059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:78872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.002 [2024-07-15 23:24:48.088073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.002 [2024-07-15 23:24:48.088088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:78880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.002 [2024-07-15 23:24:48.088101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.002 [2024-07-15 23:24:48.088117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:78888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.002 [2024-07-15 23:24:48.088130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.002 [2024-07-15 23:24:48.088146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:78896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.002 [2024-07-15 23:24:48.088160] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.002 [2024-07-15 23:24:48.088175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.002 [2024-07-15 23:24:48.088188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.002 [2024-07-15 23:24:48.088203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:78912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.002 [2024-07-15 23:24:48.088217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.002 [2024-07-15 23:24:48.088232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:78920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.002 [2024-07-15 23:24:48.088245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.002 [2024-07-15 23:24:48.088260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:78928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.002 [2024-07-15 23:24:48.088274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.002 [2024-07-15 23:24:48.088290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:78936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.002 [2024-07-15 23:24:48.088306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.002 [2024-07-15 23:24:48.088322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:78944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.002 [2024-07-15 23:24:48.088336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.002 [2024-07-15 23:24:48.088351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:78952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.002 [2024-07-15 23:24:48.088365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.002 [2024-07-15 23:24:48.088380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:78960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.002 [2024-07-15 23:24:48.088394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.002 [2024-07-15 23:24:48.088409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:78968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.002 [2024-07-15 23:24:48.088423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.002 [2024-07-15 23:24:48.088438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:78976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.002 [2024-07-15 23:24:48.088452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.002 [2024-07-15 23:24:48.088467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:78984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.002 [2024-07-15 23:24:48.088480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.002 [2024-07-15 23:24:48.088495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:78992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.003 [2024-07-15 23:24:48.088509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.003 [2024-07-15 23:24:48.088524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.003 [2024-07-15 23:24:48.088538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.003 [2024-07-15 23:24:48.088553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:79008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.003 [2024-07-15 23:24:48.088567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.003 [2024-07-15 23:24:48.088582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:79016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.003 [2024-07-15 23:24:48.088596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.003 [2024-07-15 23:24:48.088612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:79024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.003 [2024-07-15 23:24:48.088625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.003 [2024-07-15 23:24:48.088640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.003 [2024-07-15 23:24:48.088654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.003 [2024-07-15 23:24:48.088673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.003 [2024-07-15 23:24:48.088687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.003 [2024-07-15 23:24:48.088702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:79048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.003 [2024-07-15 23:24:48.088716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.003 [2024-07-15 23:24:48.088754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.003 [2024-07-15 23:24:48.088769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.003 
[2024-07-15 23:24:48.088786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:79064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.003 [2024-07-15 23:24:48.088801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.003 [2024-07-15 23:24:48.088817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:79072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.003 [2024-07-15 23:24:48.088832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.003 [2024-07-15 23:24:48.088848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:79080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.003 [2024-07-15 23:24:48.088863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.003 [2024-07-15 23:24:48.088878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:79088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.003 [2024-07-15 23:24:48.088892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.003 [2024-07-15 23:24:48.088907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:79096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.003 [2024-07-15 23:24:48.088922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.003 [2024-07-15 23:24:48.088938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:79104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.003 [2024-07-15 23:24:48.088952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.003 [2024-07-15 23:24:48.088967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:79112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.003 [2024-07-15 23:24:48.088981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.003 [2024-07-15 23:24:48.088997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:79120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.003 [2024-07-15 23:24:48.089011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.003 [2024-07-15 23:24:48.089042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:79128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.003 [2024-07-15 23:24:48.089056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.003 [2024-07-15 23:24:48.089071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:79136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.003 [2024-07-15 23:24:48.089088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.003 [2024-07-15 23:24:48.089104] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:79144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.003 [2024-07-15 23:24:48.089117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.003 [2024-07-15 23:24:48.089133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.003 [2024-07-15 23:24:48.089147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.003 [2024-07-15 23:24:48.089162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:79160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.003 [2024-07-15 23:24:48.089175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.003 [2024-07-15 23:24:48.089191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.003 [2024-07-15 23:24:48.089205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.003 [2024-07-15 23:24:48.089221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:79176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.003 [2024-07-15 23:24:48.089234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.003 [2024-07-15 23:24:48.089249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:79184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.003 [2024-07-15 23:24:48.089263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.003 [2024-07-15 23:24:48.089279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.003 [2024-07-15 23:24:48.089293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.003 [2024-07-15 23:24:48.089308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:79200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.003 [2024-07-15 23:24:48.089321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.003 [2024-07-15 23:24:48.089337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.003 [2024-07-15 23:24:48.089350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.003 [2024-07-15 23:24:48.089366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:79216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.003 [2024-07-15 23:24:48.089379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.003 [2024-07-15 23:24:48.089407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:5 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.003 [2024-07-15 23:24:48.089422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.003 [2024-07-15 23:24:48.089437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:79232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.003 [2024-07-15 23:24:48.089451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.003 [2024-07-15 23:24:48.089466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.003 [2024-07-15 23:24:48.089482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.003 [2024-07-15 23:24:48.089498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:79248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.003 [2024-07-15 23:24:48.089512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.003 [2024-07-15 23:24:48.089527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.003 [2024-07-15 23:24:48.089541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.003 [2024-07-15 23:24:48.089581] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.003 [2024-07-15 23:24:48.089598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79264 len:8 PRP1 0x0 PRP2 0x0 00:21:44.003 [2024-07-15 23:24:48.089611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.003 [2024-07-15 23:24:48.089629] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.003 [2024-07-15 23:24:48.089641] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.003 [2024-07-15 23:24:48.089652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79272 len:8 PRP1 0x0 PRP2 0x0 00:21:44.003 [2024-07-15 23:24:48.089664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.003 [2024-07-15 23:24:48.089677] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.003 [2024-07-15 23:24:48.089688] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.003 [2024-07-15 23:24:48.089699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79280 len:8 PRP1 0x0 PRP2 0x0 00:21:44.003 [2024-07-15 23:24:48.089712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.003 [2024-07-15 23:24:48.089748] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.003 [2024-07-15 23:24:48.089761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.003 [2024-07-15 
23:24:48.089773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79288 len:8 PRP1 0x0 PRP2 0x0 00:21:44.003 [2024-07-15 23:24:48.089791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.003 [2024-07-15 23:24:48.089806] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.003 [2024-07-15 23:24:48.089817] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.003 [2024-07-15 23:24:48.089828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79296 len:8 PRP1 0x0 PRP2 0x0 00:21:44.003 [2024-07-15 23:24:48.089841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.003 [2024-07-15 23:24:48.089855] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.003 [2024-07-15 23:24:48.089866] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.003 [2024-07-15 23:24:48.089877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79304 len:8 PRP1 0x0 PRP2 0x0 00:21:44.003 [2024-07-15 23:24:48.089890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.004 [2024-07-15 23:24:48.089903] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.004 [2024-07-15 23:24:48.089919] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.004 [2024-07-15 23:24:48.089931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79312 len:8 PRP1 0x0 PRP2 0x0 00:21:44.004 [2024-07-15 23:24:48.089944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.004 [2024-07-15 23:24:48.089957] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.004 [2024-07-15 23:24:48.089968] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.004 [2024-07-15 23:24:48.089979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79320 len:8 PRP1 0x0 PRP2 0x0 00:21:44.004 [2024-07-15 23:24:48.089992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.004 [2024-07-15 23:24:48.090005] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.004 [2024-07-15 23:24:48.090016] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.004 [2024-07-15 23:24:48.090028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79328 len:8 PRP1 0x0 PRP2 0x0 00:21:44.004 [2024-07-15 23:24:48.090056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.004 [2024-07-15 23:24:48.090069] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.004 [2024-07-15 23:24:48.090080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.004 [2024-07-15 23:24:48.090091] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79336 len:8 PRP1 0x0 PRP2 0x0 00:21:44.004 [2024-07-15 23:24:48.090103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.004 [2024-07-15 23:24:48.090116] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.004 [2024-07-15 23:24:48.090126] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.004 [2024-07-15 23:24:48.090137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79344 len:8 PRP1 0x0 PRP2 0x0 00:21:44.004 [2024-07-15 23:24:48.090150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.004 [2024-07-15 23:24:48.090163] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.004 [2024-07-15 23:24:48.090173] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.004 [2024-07-15 23:24:48.090184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79352 len:8 PRP1 0x0 PRP2 0x0 00:21:44.004 [2024-07-15 23:24:48.090203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.004 [2024-07-15 23:24:48.090265] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2045600 was disconnected and freed. reset controller. 00:21:44.004 [2024-07-15 23:24:48.090282] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:21:44.004 [2024-07-15 23:24:48.090315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:44.004 [2024-07-15 23:24:48.090337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.004 [2024-07-15 23:24:48.090352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:44.004 [2024-07-15 23:24:48.090365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.004 [2024-07-15 23:24:48.090379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:44.004 [2024-07-15 23:24:48.090405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.004 [2024-07-15 23:24:48.090419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:44.004 [2024-07-15 23:24:48.090432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.004 [2024-07-15 23:24:48.090445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:44.004 [2024-07-15 23:24:48.093704] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:44.004 [2024-07-15 23:24:48.093775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e7a8d0 (9): Bad file descriptor 00:21:44.004 [2024-07-15 23:24:48.209775] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:44.004 [2024-07-15 23:24:52.676149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:22792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.004 [2024-07-15 23:24:52.676197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.004 [2024-07-15 23:24:52.676228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.004 [2024-07-15 23:24:52.676244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.004 [2024-07-15 23:24:52.676262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.004 [2024-07-15 23:24:52.676277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.004 [2024-07-15 23:24:52.676294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.004 [2024-07-15 23:24:52.676311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.004 [2024-07-15 23:24:52.676327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.004 [2024-07-15 23:24:52.676341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.004 [2024-07-15 23:24:52.676357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:22896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.004 [2024-07-15 23:24:52.676371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.004 [2024-07-15 23:24:52.676387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.004 [2024-07-15 23:24:52.676403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.004 [2024-07-15 23:24:52.676420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.004 [2024-07-15 23:24:52.676434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.004 [2024-07-15 23:24:52.676452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.004 [2024-07-15 23:24:52.676468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.004 [2024-07-15 23:24:52.676485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.004 [2024-07-15 23:24:52.676506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.004 [2024-07-15 23:24:52.676524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.004 [2024-07-15 23:24:52.676538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.004 [2024-07-15 23:24:52.676554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.004 [2024-07-15 23:24:52.676568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.004 [2024-07-15 23:24:52.676584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.004 [2024-07-15 23:24:52.676597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.004 [2024-07-15 23:24:52.676613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.004 [2024-07-15 23:24:52.676627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.004 [2024-07-15 23:24:52.676642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.004 [2024-07-15 23:24:52.676655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.004 [2024-07-15 23:24:52.676670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.004 [2024-07-15 23:24:52.676684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.004 [2024-07-15 23:24:52.676699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.004 [2024-07-15 23:24:52.676728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.004 [2024-07-15 23:24:52.676757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.004 [2024-07-15 23:24:52.676774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.004 [2024-07-15 23:24:52.676790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.004 [2024-07-15 23:24:52.676805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.004 
[2024-07-15 23:24:52.676821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.004 [2024-07-15 23:24:52.676836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.004 [2024-07-15 23:24:52.676853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:23016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.004 [2024-07-15 23:24:52.676868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.004 [2024-07-15 23:24:52.676884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.004 [2024-07-15 23:24:52.676898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.004 [2024-07-15 23:24:52.676918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.004 [2024-07-15 23:24:52.676933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.004 [2024-07-15 23:24:52.676950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:23040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.004 [2024-07-15 23:24:52.676964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.004 [2024-07-15 23:24:52.676980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.004 [2024-07-15 23:24:52.676994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.004 [2024-07-15 23:24:52.677010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.004 [2024-07-15 23:24:52.677025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.005 [2024-07-15 23:24:52.677041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.005 [2024-07-15 23:24:52.677070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.005 [2024-07-15 23:24:52.677086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.005 [2024-07-15 23:24:52.677100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.005 [2024-07-15 23:24:52.677116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:23080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.005 [2024-07-15 23:24:52.677129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.005 [2024-07-15 23:24:52.677145] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.005 [2024-07-15 23:24:52.677159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.005 [2024-07-15 23:24:52.677175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.005 [2024-07-15 23:24:52.677189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.005 [2024-07-15 23:24:52.677204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.005 [2024-07-15 23:24:52.677218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.005 [2024-07-15 23:24:52.677234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.005 [2024-07-15 23:24:52.677247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.005 [2024-07-15 23:24:52.677264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.005 [2024-07-15 23:24:52.677278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.005 [2024-07-15 23:24:52.677293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.005 [2024-07-15 23:24:52.677307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.005 [2024-07-15 23:24:52.677327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.005 [2024-07-15 23:24:52.677342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.005 [2024-07-15 23:24:52.677357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.005 [2024-07-15 23:24:52.677372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.005 [2024-07-15 23:24:52.677388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.005 [2024-07-15 23:24:52.677402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.005 [2024-07-15 23:24:52.677417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.005 [2024-07-15 23:24:52.677430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.005 [2024-07-15 23:24:52.677446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:9 nsid:1 lba:23160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.005 [2024-07-15 23:24:52.677460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.005 [2024-07-15 23:24:52.677475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.005 [2024-07-15 23:24:52.677489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.005 [2024-07-15 23:24:52.677504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.005 [2024-07-15 23:24:52.677518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.005 [2024-07-15 23:24:52.677534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:23184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.005 [2024-07-15 23:24:52.677548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.005 [2024-07-15 23:24:52.677563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.005 [2024-07-15 23:24:52.677577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.005 [2024-07-15 23:24:52.677592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.005 [2024-07-15 23:24:52.677606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.005 [2024-07-15 23:24:52.677622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:23208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.005 [2024-07-15 23:24:52.677636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.005 [2024-07-15 23:24:52.677650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.005 [2024-07-15 23:24:52.677664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.005 [2024-07-15 23:24:52.677679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.005 [2024-07-15 23:24:52.677696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.005 [2024-07-15 23:24:52.677712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:23232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.005 [2024-07-15 23:24:52.677750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.005 [2024-07-15 23:24:52.677769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23240 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:21:44.005 [2024-07-15 23:24:52.677784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.005 [2024-07-15 23:24:52.677801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:23248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.005 [2024-07-15 23:24:52.677814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.005 [2024-07-15 23:24:52.677830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.005 [2024-07-15 23:24:52.677845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.005 [2024-07-15 23:24:52.677861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:23264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.005 [2024-07-15 23:24:52.677875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.005 [2024-07-15 23:24:52.677890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:23272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.005 [2024-07-15 23:24:52.677905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.005 [2024-07-15 23:24:52.677920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.005 [2024-07-15 23:24:52.677935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.005 [2024-07-15 23:24:52.677952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:23288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.005 [2024-07-15 23:24:52.677966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.005 [2024-07-15 23:24:52.677982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.005 [2024-07-15 23:24:52.677996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.005 [2024-07-15 23:24:52.678012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:23304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.005 [2024-07-15 23:24:52.678042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.005 [2024-07-15 23:24:52.678059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:23312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.005 [2024-07-15 23:24:52.678072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.005 [2024-07-15 23:24:52.678088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:23320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.005 [2024-07-15 
23:24:52.678101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.005 [2024-07-15 23:24:52.678121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:23328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.005 [2024-07-15 23:24:52.678136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.005 [2024-07-15 23:24:52.678151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.005 [2024-07-15 23:24:52.678166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.005 [2024-07-15 23:24:52.678181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.005 [2024-07-15 23:24:52.678195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.005 [2024-07-15 23:24:52.678227] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.005 [2024-07-15 23:24:52.678243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23352 len:8 PRP1 0x0 PRP2 0x0 00:21:44.006 [2024-07-15 23:24:52.678256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.006 [2024-07-15 23:24:52.678275] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.006 [2024-07-15 23:24:52.678287] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.006 [2024-07-15 23:24:52.678298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:8 PRP1 0x0 PRP2 0x0 00:21:44.006 [2024-07-15 23:24:52.678311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.006 [2024-07-15 23:24:52.678324] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.006 [2024-07-15 23:24:52.678335] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.006 [2024-07-15 23:24:52.678347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23368 len:8 PRP1 0x0 PRP2 0x0 00:21:44.006 [2024-07-15 23:24:52.678359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.006 [2024-07-15 23:24:52.678373] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.006 [2024-07-15 23:24:52.678383] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.006 [2024-07-15 23:24:52.678395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23376 len:8 PRP1 0x0 PRP2 0x0 00:21:44.006 [2024-07-15 23:24:52.678408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.006 [2024-07-15 23:24:52.678421] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.006 [2024-07-15 
23:24:52.678431] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.006 [2024-07-15 23:24:52.678443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23384 len:8 PRP1 0x0 PRP2 0x0 00:21:44.006 [2024-07-15 23:24:52.678455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.006 [2024-07-15 23:24:52.678468] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.006 [2024-07-15 23:24:52.678479] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.006 [2024-07-15 23:24:52.678490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:8 PRP1 0x0 PRP2 0x0 00:21:44.006 [2024-07-15 23:24:52.678502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.006 [2024-07-15 23:24:52.678519] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.006 [2024-07-15 23:24:52.678531] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.006 [2024-07-15 23:24:52.678542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23400 len:8 PRP1 0x0 PRP2 0x0 00:21:44.006 [2024-07-15 23:24:52.678555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.006 [2024-07-15 23:24:52.678568] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.006 [2024-07-15 23:24:52.678578] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.006 [2024-07-15 23:24:52.678590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23408 len:8 PRP1 0x0 PRP2 0x0 00:21:44.006 [2024-07-15 23:24:52.678602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.006 [2024-07-15 23:24:52.678615] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.006 [2024-07-15 23:24:52.678626] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.006 [2024-07-15 23:24:52.678637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23416 len:8 PRP1 0x0 PRP2 0x0 00:21:44.006 [2024-07-15 23:24:52.678650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.006 [2024-07-15 23:24:52.678662] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.006 [2024-07-15 23:24:52.678674] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.006 [2024-07-15 23:24:52.678685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:8 PRP1 0x0 PRP2 0x0 00:21:44.006 [2024-07-15 23:24:52.678698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.006 [2024-07-15 23:24:52.678711] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.006 [2024-07-15 23:24:52.678744] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.006 [2024-07-15 23:24:52.678758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23432 len:8 PRP1 0x0 PRP2 0x0 00:21:44.006 [2024-07-15 23:24:52.678771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.006 [2024-07-15 23:24:52.678784] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.006 [2024-07-15 23:24:52.678797] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.006 [2024-07-15 23:24:52.678809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23440 len:8 PRP1 0x0 PRP2 0x0 00:21:44.006 [2024-07-15 23:24:52.678822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.006 [2024-07-15 23:24:52.678836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.006 [2024-07-15 23:24:52.678847] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.006 [2024-07-15 23:24:52.678858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23448 len:8 PRP1 0x0 PRP2 0x0 00:21:44.006 [2024-07-15 23:24:52.678872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.006 [2024-07-15 23:24:52.678885] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.006 [2024-07-15 23:24:52.678896] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.006 [2024-07-15 23:24:52.678907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:8 PRP1 0x0 PRP2 0x0 00:21:44.006 [2024-07-15 23:24:52.678926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.006 [2024-07-15 23:24:52.678941] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.006 [2024-07-15 23:24:52.678953] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.006 [2024-07-15 23:24:52.678964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23464 len:8 PRP1 0x0 PRP2 0x0 00:21:44.006 [2024-07-15 23:24:52.678977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.006 [2024-07-15 23:24:52.678990] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.006 [2024-07-15 23:24:52.679001] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.006 [2024-07-15 23:24:52.679013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23472 len:8 PRP1 0x0 PRP2 0x0 00:21:44.006 [2024-07-15 23:24:52.679026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.006 [2024-07-15 23:24:52.679054] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.006 [2024-07-15 23:24:52.679066] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:21:44.006 [2024-07-15 23:24:52.679077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23480 len:8 PRP1 0x0 PRP2 0x0 00:21:44.006 [2024-07-15 23:24:52.679090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.006 [2024-07-15 23:24:52.679103] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.006 [2024-07-15 23:24:52.679114] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.006 [2024-07-15 23:24:52.679126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:8 PRP1 0x0 PRP2 0x0 00:21:44.006 [2024-07-15 23:24:52.679138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.006 [2024-07-15 23:24:52.679151] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.006 [2024-07-15 23:24:52.679162] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.006 [2024-07-15 23:24:52.679173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23496 len:8 PRP1 0x0 PRP2 0x0 00:21:44.006 [2024-07-15 23:24:52.679185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.006 [2024-07-15 23:24:52.679199] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.006 [2024-07-15 23:24:52.679209] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.006 [2024-07-15 23:24:52.679220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23504 len:8 PRP1 0x0 PRP2 0x0 00:21:44.006 [2024-07-15 23:24:52.679232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.006 [2024-07-15 23:24:52.679245] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.006 [2024-07-15 23:24:52.679255] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.006 [2024-07-15 23:24:52.679267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23512 len:8 PRP1 0x0 PRP2 0x0 00:21:44.006 [2024-07-15 23:24:52.679279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.006 [2024-07-15 23:24:52.679292] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.006 [2024-07-15 23:24:52.679303] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.006 [2024-07-15 23:24:52.679316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:8 PRP1 0x0 PRP2 0x0 00:21:44.006 [2024-07-15 23:24:52.679329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.006 [2024-07-15 23:24:52.679343] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.006 [2024-07-15 23:24:52.679353] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.006 [2024-07-15 
23:24:52.679364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23528 len:8 PRP1 0x0 PRP2 0x0 00:21:44.006 [2024-07-15 23:24:52.679377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.006 [2024-07-15 23:24:52.679390] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.006 [2024-07-15 23:24:52.679400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.006 [2024-07-15 23:24:52.679418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23536 len:8 PRP1 0x0 PRP2 0x0 00:21:44.006 [2024-07-15 23:24:52.679431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.006 [2024-07-15 23:24:52.679444] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.006 [2024-07-15 23:24:52.679455] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.006 [2024-07-15 23:24:52.679466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23544 len:8 PRP1 0x0 PRP2 0x0 00:21:44.006 [2024-07-15 23:24:52.679478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.006 [2024-07-15 23:24:52.679491] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.006 [2024-07-15 23:24:52.679502] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.006 [2024-07-15 23:24:52.679513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:8 PRP1 0x0 PRP2 0x0 00:21:44.006 [2024-07-15 23:24:52.679526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.007 [2024-07-15 23:24:52.679539] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.007 [2024-07-15 23:24:52.679549] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.007 [2024-07-15 23:24:52.679560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23560 len:8 PRP1 0x0 PRP2 0x0 00:21:44.007 [2024-07-15 23:24:52.679572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.007 [2024-07-15 23:24:52.679586] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.007 [2024-07-15 23:24:52.679596] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.007 [2024-07-15 23:24:52.679607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23568 len:8 PRP1 0x0 PRP2 0x0 00:21:44.007 [2024-07-15 23:24:52.679619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.007 [2024-07-15 23:24:52.679632] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.007 [2024-07-15 23:24:52.679643] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.007 [2024-07-15 23:24:52.679654] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23576 len:8 PRP1 0x0 PRP2 0x0 00:21:44.007 [2024-07-15 23:24:52.679666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.007 [2024-07-15 23:24:52.679680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.007 [2024-07-15 23:24:52.679693] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.007 [2024-07-15 23:24:52.679705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:8 PRP1 0x0 PRP2 0x0 00:21:44.007 [2024-07-15 23:24:52.679732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.007 [2024-07-15 23:24:52.679757] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.007 [2024-07-15 23:24:52.679770] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.007 [2024-07-15 23:24:52.679781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23592 len:8 PRP1 0x0 PRP2 0x0 00:21:44.007 [2024-07-15 23:24:52.679794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.007 [2024-07-15 23:24:52.679807] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.007 [2024-07-15 23:24:52.679818] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.007 [2024-07-15 23:24:52.679830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23600 len:8 PRP1 0x0 PRP2 0x0 00:21:44.007 [2024-07-15 23:24:52.679844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.007 [2024-07-15 23:24:52.679857] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.007 [2024-07-15 23:24:52.679868] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.007 [2024-07-15 23:24:52.679879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23608 len:8 PRP1 0x0 PRP2 0x0 00:21:44.007 [2024-07-15 23:24:52.679892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.007 [2024-07-15 23:24:52.679905] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.007 [2024-07-15 23:24:52.679917] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.007 [2024-07-15 23:24:52.679928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:8 PRP1 0x0 PRP2 0x0 00:21:44.007 [2024-07-15 23:24:52.679941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.007 [2024-07-15 23:24:52.679954] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.007 [2024-07-15 23:24:52.679966] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.007 [2024-07-15 23:24:52.679977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:23624 len:8 PRP1 0x0 PRP2 0x0 00:21:44.007 [2024-07-15 23:24:52.679990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.007 [2024-07-15 23:24:52.680003] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.007 [2024-07-15 23:24:52.680014] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.007 [2024-07-15 23:24:52.680041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23632 len:8 PRP1 0x0 PRP2 0x0 00:21:44.007 [2024-07-15 23:24:52.680054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.007 [2024-07-15 23:24:52.680067] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.007 [2024-07-15 23:24:52.680078] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.007 [2024-07-15 23:24:52.680089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23640 len:8 PRP1 0x0 PRP2 0x0 00:21:44.007 [2024-07-15 23:24:52.680102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.007 [2024-07-15 23:24:52.680119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.007 [2024-07-15 23:24:52.680130] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.007 [2024-07-15 23:24:52.680141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:8 PRP1 0x0 PRP2 0x0 00:21:44.007 [2024-07-15 23:24:52.680153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.007 [2024-07-15 23:24:52.680166] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.007 [2024-07-15 23:24:52.680177] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.007 [2024-07-15 23:24:52.680188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23656 len:8 PRP1 0x0 PRP2 0x0 00:21:44.007 [2024-07-15 23:24:52.680201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.007 [2024-07-15 23:24:52.680213] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.007 [2024-07-15 23:24:52.680224] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.007 [2024-07-15 23:24:52.680235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23664 len:8 PRP1 0x0 PRP2 0x0 00:21:44.007 [2024-07-15 23:24:52.680247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.007 [2024-07-15 23:24:52.680260] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.007 [2024-07-15 23:24:52.680271] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.007 [2024-07-15 23:24:52.680287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23672 len:8 PRP1 0x0 PRP2 0x0 
00:21:44.007 [2024-07-15 23:24:52.680301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.007 [2024-07-15 23:24:52.680313] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.007 [2024-07-15 23:24:52.680330] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.007 [2024-07-15 23:24:52.680342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:8 PRP1 0x0 PRP2 0x0 00:21:44.007 [2024-07-15 23:24:52.680354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.007 [2024-07-15 23:24:52.680367] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.007 [2024-07-15 23:24:52.680377] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.007 [2024-07-15 23:24:52.680388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23688 len:8 PRP1 0x0 PRP2 0x0 00:21:44.007 [2024-07-15 23:24:52.680401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.007 [2024-07-15 23:24:52.680414] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.007 [2024-07-15 23:24:52.680425] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.007 [2024-07-15 23:24:52.680435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23696 len:8 PRP1 0x0 PRP2 0x0 00:21:44.007 [2024-07-15 23:24:52.680448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.007 [2024-07-15 23:24:52.680461] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.007 [2024-07-15 23:24:52.680471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.007 [2024-07-15 23:24:52.680482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23704 len:8 PRP1 0x0 PRP2 0x0 00:21:44.007 [2024-07-15 23:24:52.680498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.007 [2024-07-15 23:24:52.680511] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.007 [2024-07-15 23:24:52.680522] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.007 [2024-07-15 23:24:52.680533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:8 PRP1 0x0 PRP2 0x0 00:21:44.007 [2024-07-15 23:24:52.680545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.007 [2024-07-15 23:24:52.680558] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.007 [2024-07-15 23:24:52.680569] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.007 [2024-07-15 23:24:52.680580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23720 len:8 PRP1 0x0 PRP2 0x0 00:21:44.007 [2024-07-15 23:24:52.680593] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.007 [2024-07-15 23:24:52.680606] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.007 [2024-07-15 23:24:52.680616] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.007 [2024-07-15 23:24:52.680627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23728 len:8 PRP1 0x0 PRP2 0x0 00:21:44.007 [2024-07-15 23:24:52.680639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.007 [2024-07-15 23:24:52.680652] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.007 [2024-07-15 23:24:52.680663] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.007 [2024-07-15 23:24:52.680679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23736 len:8 PRP1 0x0 PRP2 0x0 00:21:44.007 [2024-07-15 23:24:52.680692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.007 [2024-07-15 23:24:52.680706] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.007 [2024-07-15 23:24:52.680744] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.007 [2024-07-15 23:24:52.680759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:8 PRP1 0x0 PRP2 0x0 00:21:44.007 [2024-07-15 23:24:52.680772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.007 [2024-07-15 23:24:52.680786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.007 [2024-07-15 23:24:52.680797] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.007 [2024-07-15 23:24:52.680808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23752 len:8 PRP1 0x0 PRP2 0x0 00:21:44.007 [2024-07-15 23:24:52.680822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.008 [2024-07-15 23:24:52.680835] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.008 [2024-07-15 23:24:52.680846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.008 [2024-07-15 23:24:52.680857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23760 len:8 PRP1 0x0 PRP2 0x0 00:21:44.008 [2024-07-15 23:24:52.680870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.008 [2024-07-15 23:24:52.680883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.008 [2024-07-15 23:24:52.680895] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.008 [2024-07-15 23:24:52.699971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23768 len:8 PRP1 0x0 PRP2 0x0 00:21:44.008 [2024-07-15 23:24:52.700002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.008 [2024-07-15 23:24:52.700034] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.008 [2024-07-15 23:24:52.700047] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.008 [2024-07-15 23:24:52.700058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:8 PRP1 0x0 PRP2 0x0 00:21:44.008 [2024-07-15 23:24:52.700071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.008 [2024-07-15 23:24:52.700099] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.008 [2024-07-15 23:24:52.700110] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.008 [2024-07-15 23:24:52.700121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23784 len:8 PRP1 0x0 PRP2 0x0 00:21:44.008 [2024-07-15 23:24:52.700133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.008 [2024-07-15 23:24:52.700146] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.008 [2024-07-15 23:24:52.700156] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.008 [2024-07-15 23:24:52.700167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23792 len:8 PRP1 0x0 PRP2 0x0 00:21:44.008 [2024-07-15 23:24:52.700179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.008 [2024-07-15 23:24:52.700192] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.008 [2024-07-15 23:24:52.700202] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.008 [2024-07-15 23:24:52.700214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23800 len:8 PRP1 0x0 PRP2 0x0 00:21:44.008 [2024-07-15 23:24:52.700240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.008 [2024-07-15 23:24:52.700254] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.008 [2024-07-15 23:24:52.700267] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.008 [2024-07-15 23:24:52.700279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:8 PRP1 0x0 PRP2 0x0 00:21:44.008 [2024-07-15 23:24:52.700307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.008 [2024-07-15 23:24:52.700321] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.008 [2024-07-15 23:24:52.700333] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.008 [2024-07-15 23:24:52.700345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22808 len:8 PRP1 0x0 PRP2 0x0 00:21:44.008 [2024-07-15 23:24:52.700358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:44.008 [2024-07-15 23:24:52.700372] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.008 [2024-07-15 23:24:52.700383] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.008 [2024-07-15 23:24:52.700395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22816 len:8 PRP1 0x0 PRP2 0x0 00:21:44.008 [2024-07-15 23:24:52.700408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.008 [2024-07-15 23:24:52.700427] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.008 [2024-07-15 23:24:52.700440] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.008 [2024-07-15 23:24:52.700452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22824 len:8 PRP1 0x0 PRP2 0x0 00:21:44.008 [2024-07-15 23:24:52.700465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.008 [2024-07-15 23:24:52.700479] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.008 [2024-07-15 23:24:52.700490] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.008 [2024-07-15 23:24:52.700502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22832 len:8 PRP1 0x0 PRP2 0x0 00:21:44.008 [2024-07-15 23:24:52.700515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.008 [2024-07-15 23:24:52.700530] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.008 [2024-07-15 23:24:52.700541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.008 [2024-07-15 23:24:52.700553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22840 len:8 PRP1 0x0 PRP2 0x0 00:21:44.008 [2024-07-15 23:24:52.700566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.008 [2024-07-15 23:24:52.700580] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.008 [2024-07-15 23:24:52.700591] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.008 [2024-07-15 23:24:52.700603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22848 len:8 PRP1 0x0 PRP2 0x0 00:21:44.008 [2024-07-15 23:24:52.700616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.008 [2024-07-15 23:24:52.700629] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:44.008 [2024-07-15 23:24:52.700640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:44.008 [2024-07-15 23:24:52.700652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22856 len:8 PRP1 0x0 PRP2 0x0 00:21:44.008 [2024-07-15 23:24:52.700666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.008 [2024-07-15 23:24:52.700734] 
bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20452c0 was disconnected and freed. reset controller. 00:21:44.008 [2024-07-15 23:24:52.700767] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:21:44.008 [2024-07-15 23:24:52.700809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:44.008 [2024-07-15 23:24:52.700828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.008 [2024-07-15 23:24:52.700845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:44.008 [2024-07-15 23:24:52.700859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.008 [2024-07-15 23:24:52.700885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:44.008 [2024-07-15 23:24:52.700898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.008 [2024-07-15 23:24:52.700912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:44.008 [2024-07-15 23:24:52.700926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.008 [2024-07-15 23:24:52.700944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:44.008 [2024-07-15 23:24:52.701000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e7a8d0 (9): Bad file descriptor 00:21:44.008 [2024-07-15 23:24:52.704324] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:44.008 [2024-07-15 23:24:52.737985] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
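The long run of ABORTED - SQ DELETION completions above is the expected signature of a path switch: bdev_nvme starts a failover from 10.0.0.2:4422 back to 10.0.0.2:4420, the queue pair on the old path is disconnected and freed, queued I/O is completed manually, and the controller is then reset on the surviving path (the final "Resetting controller successful" notice). A rough sketch of how alternate paths for the same subsystem are registered over the bdevperf RPC socket is shown below; the socket, bdev name, and NQN match this run, but the snippet is illustrative rather than the exact host/failover.sh code.

  # sketch: register two TCP paths for the same subsystem so bdev_nvme can fail over
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # if the active path is removed or drops, bdev_nvme moves I/O to the remaining trid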
00:21:44.008 
00:21:44.008 Latency(us)
00:21:44.008 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:44.008 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:44.008 Verification LBA range: start 0x0 length 0x4000
00:21:44.008 NVMe0n1 : 15.01 8326.32 32.52 469.96 0.00 14521.42 788.86 33010.73
00:21:44.008 ===================================================================================================================
00:21:44.008 Total : 8326.32 32.52 469.96 0.00 14521.42 788.86 33010.73
00:21:44.008 Received shutdown signal, test time was about 15.000000 seconds
00:21:44.008 
00:21:44.008 Latency(us)
00:21:44.008 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:44.008 ===================================================================================================================
00:21:44.008 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:44.008 23:24:58 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:21:44.008 23:24:58 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:21:44.008 23:24:58 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:21:44.008 23:24:58 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2405177
00:21:44.008 23:24:58 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:21:44.008 23:24:58 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2405177 /var/tmp/bdevperf.sock
00:21:44.008 23:24:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2405177 ']'
00:21:44.008 23:24:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:21:44.008 23:24:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:21:44.008 23:24:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
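bdevperf is launched here with -z, so it starts idle and only runs the verify workload once instructed over the RPC socket /var/tmp/bdevperf.sock; waitforlisten then blocks until that socket answers RPCs, retrying up to max_retries times. A rough sketch of that wait loop (not the exact autotest_common.sh helper) could look like this:

  # sketch: poll the RPC socket until the SPDK application is ready to accept commands
  for _ in $(seq 1 100); do
      rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods > /dev/null 2>&1 && break
      sleep 0.1
  done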
00:21:44.008 23:24:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:44.008 23:24:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:44.008 23:24:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:44.008 23:24:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:44.008 23:24:58 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:44.008 [2024-07-15 23:24:59.026101] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:44.008 23:24:59 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:44.008 [2024-07-15 23:24:59.274798] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:44.008 23:24:59 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:44.570 NVMe0n1 00:21:44.570 23:24:59 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:44.827 00:21:44.827 23:25:00 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:45.084 00:21:45.340 23:25:00 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:45.340 23:25:00 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:21:45.340 23:25:00 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:45.596 23:25:00 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:21:48.870 23:25:03 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:48.870 23:25:03 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:21:48.870 23:25:04 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2405869 00:21:48.870 23:25:04 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:48.870 23:25:04 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 2405869 00:21:50.243 0 00:21:50.243 23:25:05 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:50.243 [2024-07-15 23:24:58.532049] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
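The trace above adds listeners for nqn.2016-06.io.spdk:cnode1 on ports 4421 and 4422, attaches the NVMe0 controller to bdevperf over 4420, 4421, and 4422, and then detaches the 4420 path, so the upcoming I/O run has to fail over to one of the remaining paths. Condensed to the essential RPCs, using the same addresses as the trace (a sketch, not a verbatim excerpt of host/failover.sh):

  # target side: expose the subsystem on the extra ports
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  # host side: attach the primary path, repeat for 4421 and 4422, then drop the active path
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1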
00:21:50.243 [2024-07-15 23:24:58.532162] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2405177 ] 00:21:50.243 EAL: No free 2048 kB hugepages reported on node 1 00:21:50.243 [2024-07-15 23:24:58.592379] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.243 [2024-07-15 23:24:58.699428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:50.243 [2024-07-15 23:25:00.884983] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:50.243 [2024-07-15 23:25:00.885106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:50.243 [2024-07-15 23:25:00.885140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.243 [2024-07-15 23:25:00.885157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:50.243 [2024-07-15 23:25:00.885170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.243 [2024-07-15 23:25:00.885184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:50.243 [2024-07-15 23:25:00.885197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.243 [2024-07-15 23:25:00.885211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:50.243 [2024-07-15 23:25:00.885224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.243 [2024-07-15 23:25:00.885237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:50.243 [2024-07-15 23:25:00.885293] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:50.243 [2024-07-15 23:25:00.885324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ae8d0 (9): Bad file descriptor 00:21:50.243 [2024-07-15 23:25:00.936323] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:50.243 Running I/O for 1 seconds... 
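The verify run itself is kicked off by bdevperf.py perform_tests against the already-running bdevperf instance, and its output is captured to try.txt so the script can later count the "Resetting controller successful" notices. A sketch of that invocation, with the full Jenkins workspace path shortened to a hypothetical $SPDK_DIR for readability:

  # sketch: trigger the configured bdevperf workload over RPC and wait for it to finish
  $SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  run_test_pid=$!
  wait $run_test_pid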
00:21:50.243 00:21:50.243 Latency(us) 00:21:50.243 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:50.243 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:50.243 Verification LBA range: start 0x0 length 0x4000 00:21:50.243 NVMe0n1 : 1.00 8646.96 33.78 0.00 0.00 14742.08 1122.61 14757.74 00:21:50.243 =================================================================================================================== 00:21:50.243 Total : 8646.96 33.78 0.00 0.00 14742.08 1122.61 14757.74 00:21:50.243 23:25:05 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:50.243 23:25:05 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:21:50.501 23:25:05 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:50.759 23:25:05 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:50.759 23:25:05 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:21:51.016 23:25:06 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:51.274 23:25:06 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:21:54.553 23:25:09 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:54.553 23:25:09 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:21:54.553 23:25:09 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 2405177 00:21:54.553 23:25:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2405177 ']' 00:21:54.553 23:25:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2405177 00:21:54.553 23:25:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:21:54.553 23:25:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:54.553 23:25:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2405177 00:21:54.553 23:25:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:54.553 23:25:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:54.553 23:25:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2405177' 00:21:54.553 killing process with pid 2405177 00:21:54.553 23:25:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2405177 00:21:54.553 23:25:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2405177 00:21:54.553 23:25:09 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:21:54.553 23:25:09 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:54.811 23:25:10 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:21:54.811 
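After the 1-second verify pass above (about 8647 IOPS with no failed I/O), the script confirms the NVMe0 controller is still registered, removes the remaining 4422 and 4421 paths, stops bdevperf, and deletes the subsystem on the target. The presence check reduces to a one-liner of this shape (a sketch of the traced @95/@99/@103 steps):

  # non-zero exit status here fails the test step
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0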
23:25:10 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:54.811 23:25:10 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:21:54.811 23:25:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:54.811 23:25:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:21:54.811 23:25:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:54.811 23:25:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:21:54.811 23:25:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:54.811 23:25:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:54.811 rmmod nvme_tcp 00:21:55.069 rmmod nvme_fabrics 00:21:55.069 rmmod nvme_keyring 00:21:55.069 23:25:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:55.069 23:25:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:21:55.069 23:25:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:21:55.069 23:25:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2402961 ']' 00:21:55.069 23:25:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2402961 00:21:55.069 23:25:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2402961 ']' 00:21:55.069 23:25:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2402961 00:21:55.069 23:25:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:21:55.069 23:25:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:55.069 23:25:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2402961 00:21:55.069 23:25:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:55.069 23:25:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:55.069 23:25:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2402961' 00:21:55.069 killing process with pid 2402961 00:21:55.069 23:25:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2402961 00:21:55.069 23:25:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2402961 00:21:55.327 23:25:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:55.328 23:25:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:55.328 23:25:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:55.328 23:25:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:55.328 23:25:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:55.328 23:25:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.328 23:25:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:55.328 23:25:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.859 23:25:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:57.859 00:21:57.859 real 0m35.090s 00:21:57.859 user 2m2.991s 00:21:57.859 sys 0m6.185s 00:21:57.859 23:25:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:57.859 23:25:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
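nvmftestfini then unloads the host-side NVMe/TCP modules, kills the long-running nvmf target (pid 2402961), and flushes the test interface addresses; the whole failover test takes about 35 seconds of wall time. The module cleanup loop seen above boils down to roughly the following (a sketch, not the literal nvmf/common.sh code):

  # sketch: retry unloading, since nvme-tcp must go before nvme-fabrics
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
  done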
00:21:57.859 ************************************ 00:21:57.859 END TEST nvmf_failover 00:21:57.859 ************************************ 00:21:57.859 23:25:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:57.859 23:25:12 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:57.859 23:25:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:57.859 23:25:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:57.859 23:25:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:57.859 ************************************ 00:21:57.859 START TEST nvmf_host_discovery 00:21:57.859 ************************************ 00:21:57.859 23:25:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:57.859 * Looking for test storage... 00:21:57.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:57.859 23:25:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:57.859 23:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:21:57.859 23:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:57.859 23:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:57.859 23:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:57.859 23:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:57.859 23:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:57.859 23:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:57.859 23:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:57.859 23:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:57.859 23:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:57.859 23:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:57.859 23:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:57.859 23:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:57.859 23:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:57.859 23:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:57.859 23:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:57.859 23:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:57.859 23:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:57.859 23:25:12 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:57.859 23:25:12 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:57.859 23:25:12 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:57.859 23:25:12 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.860 23:25:12 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.860 23:25:12 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.860 23:25:12 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:21:57.860 23:25:12 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.860 23:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:21:57.860 23:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:57.860 23:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:57.860 23:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:57.860 23:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:57.860 23:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:57.860 23:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:57.860 23:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:57.860 23:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:57.860 23:25:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:21:57.860 23:25:12 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:21:57.860 23:25:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:21:57.860 23:25:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:21:57.860 23:25:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:21:57.860 23:25:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:21:57.860 23:25:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:21:57.860 23:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:57.860 23:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:57.860 23:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:57.860 23:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:57.860 23:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:57.860 23:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.860 23:25:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:57.860 23:25:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.860 23:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:57.860 23:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:57.860 23:25:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:21:57.860 23:25:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:59.761 23:25:14 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:59.761 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:59.761 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:59.761 23:25:14 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:59.761 Found net devices under 0000:84:00.0: cvl_0_0 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:59.761 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:59.762 Found net devices under 0000:84:00.1: cvl_0_1 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:59.762 23:25:14 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:59.762 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:59.762 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:21:59.762 00:21:59.762 --- 10.0.0.2 ping statistics --- 00:21:59.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.762 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:59.762 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:59.762 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:21:59.762 00:21:59.762 --- 10.0.0.1 ping statistics --- 00:21:59.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.762 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=2408505 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 2408505 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2408505 ']' 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:59.762 23:25:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:59.762 [2024-07-15 23:25:14.783405] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:21:59.762 [2024-07-15 23:25:14.783484] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:59.762 EAL: No free 2048 kB hugepages reported on node 1 00:21:59.762 [2024-07-15 23:25:14.847689] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.762 [2024-07-15 23:25:14.953327] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:59.762 [2024-07-15 23:25:14.953385] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:59.762 [2024-07-15 23:25:14.953407] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:59.762 [2024-07-15 23:25:14.953417] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:59.762 [2024-07-15 23:25:14.953427] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
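Here the discovery test brings up its own target: nvmf_tgt is started inside the cvl_0_0_ns_spdk namespace with shared-memory id 0, the full 0xFFFF tracepoint mask and core mask 0x2 (a single reactor on core 1), and the harness then waits for the /var/tmp/spdk.sock RPC socket before driving it. A hand-run equivalent might look like the sketch below; the rpc_get_methods polling loop is only a stand-in for the harness's waitforlisten helper, and $SPDK_DIR again abbreviates the checkout path:

# launch the target in the test namespace, backgrounded
ip netns exec cvl_0_0_ns_spdk $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
# wait until the RPC socket answers before issuing any nvmf_* RPCs
until $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done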
00:21:59.762 [2024-07-15 23:25:14.953459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:59.762 23:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:59.762 23:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:21:59.762 23:25:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:59.762 23:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:59.762 23:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:00.020 23:25:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:00.020 23:25:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:00.020 23:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.020 23:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:00.020 [2024-07-15 23:25:15.087302] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:00.020 23:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.020 23:25:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:00.021 23:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.021 23:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:00.021 [2024-07-15 23:25:15.095476] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:00.021 23:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.021 23:25:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:00.021 23:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.021 23:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:00.021 null0 00:22:00.021 23:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.021 23:25:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:00.021 23:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.021 23:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:00.021 null1 00:22:00.021 23:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.021 23:25:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:00.021 23:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.021 23:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:00.021 23:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.021 23:25:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2408534 00:22:00.021 23:25:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:00.021 23:25:15 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 2408534 /tmp/host.sock 00:22:00.021 23:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2408534 ']' 00:22:00.021 23:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:00.021 23:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:00.021 23:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:00.021 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:00.021 23:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:00.021 23:25:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:00.021 [2024-07-15 23:25:15.169264] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:22:00.021 [2024-07-15 23:25:15.169331] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2408534 ] 00:22:00.021 EAL: No free 2048 kB hugepages reported on node 1 00:22:00.021 [2024-07-15 23:25:15.229709] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.279 [2024-07-15 23:25:15.347471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.843 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:00.843 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:22:00.843 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:00.843 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:00.843 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.843 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:01.101 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.101 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:01.101 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.101 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:01.101 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.101 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:22:01.101 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:22:01.101 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:01.101 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:01.101 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.101 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:01.101 23:25:16 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:22:01.101 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:01.101 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.101 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:01.101 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:22:01.101 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
null0 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:01.102 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.360 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:01.361 [2024-07-15 23:25:16.447227] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:22:01.361 23:25:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:22:01.927 [2024-07-15 23:25:17.176474] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:01.927 [2024-07-15 23:25:17.176514] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:01.927 [2024-07-15 23:25:17.176540] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:02.185 [2024-07-15 23:25:17.305961] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:02.185 [2024-07-15 23:25:17.488674] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:02.185 [2024-07-15 23:25:17.488700] 
bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 
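The waitforcondition checks in this stretch poll the host application's RPC socket until the discovery service has produced the expected objects: a controller named nvme0, a bdev list reading nvme0n1, and, at this point, a single path on port 4420; after the 4421 listener is added the same path check is repeated until both ports show up. The probes reduce to three RPC-plus-jq one-liners against /tmp/host.sock (paths shortened as before):

# controller created by bdev_nvme_start_discovery
$SPDK_DIR/scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'      # expect: nvme0
# namespaces exposed by the attached subsystem
$SPDK_DIR/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'                 # expect: nvme0n1 (later nvme0n1 nvme0n2)
# transport service ids of the controller's paths
$SPDK_DIR/scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid'   # expect: 4420, then 4420 4421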
00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:02.443 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.444 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.702 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.702 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:02.702 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:22:02.702 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:02.702 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:02.702 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:02.702 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.702 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.702 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.702 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:02.702 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.703 [2024-07-15 23:25:17.887448] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:02.703 [2024-07-15 23:25:17.887805] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:02.703 [2024-07-15 23:25:17.887836] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:02.703 23:25:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.703 [2024-07-15 23:25:18.014265] 
bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:22:02.961 23:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:22:02.961 23:25:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:22:03.220 [2024-07-15 23:25:18.318978] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:03.220 [2024-07-15 23:25:18.319001] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:03.220 [2024-07-15 23:25:18.319010] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:03.786 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:03.786 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:03.786 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:22:03.786 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:03.786 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:03.786 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.786 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:03.786 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:03.786 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:03.786 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.786 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:03.786 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:03.786 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:22:03.786 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:03.786 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:03.786 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:03.786 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:03.786 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:03.786 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:03.786 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:03.786 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:03.786 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:03.786 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.786 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:03.786 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.075 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:04.075 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:04.075 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:04.075 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:04.075 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:04.075 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.075 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:04.075 [2024-07-15 23:25:19.123349] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:04.075 [2024-07-15 23:25:19.123398] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:04.075 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.075 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:04.075 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:04.075 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:04.075 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:04.075 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:04.075 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:04.075 [2024-07-15 23:25:19.129679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:04.075 [2024-07-15 23:25:19.129717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.075 [2024-07-15 23:25:19.129744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:04.075 [2024-07-15 23:25:19.129763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.075 [2024-07-15 23:25:19.129794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:04.075 [2024-07-15 23:25:19.129808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.075 [2024-07-15 23:25:19.129822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:04.075 [2024-07-15 23:25:19.129835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.075 [2024-07-15 23:25:19.129848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eed3d0 is same with the state(5) to be set 00:22:04.075 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:04.075 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.075 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:04.075 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:04.075 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:04.075 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:04.075 [2024-07-15 23:25:19.139684] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eed3d0 (9): Bad file descriptor 00:22:04.075 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.075 [2024-07-15 23:25:19.149725] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:04.075 [2024-07-15 23:25:19.150007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.075 [2024-07-15 23:25:19.150052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eed3d0 with addr=10.0.0.2, port=4420 00:22:04.075 [2024-07-15 23:25:19.150072] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eed3d0 is same with the state(5) to be set 00:22:04.075 [2024-07-15 23:25:19.150098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eed3d0 (9): Bad file descriptor 00:22:04.075 [2024-07-15 23:25:19.150123] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:04.075 [2024-07-15 23:25:19.150140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:04.075 [2024-07-15 23:25:19.150158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:04.075 [2024-07-15 23:25:19.150180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
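[annotation] The checks traced above all go through the harness's waitforcondition helper (common/autotest_common.sh lines 912-918 in the trace): it stores the condition string, retries with a budget of max=10, evals the condition, returns 0 on success and sleeps 1 between attempts. A minimal sketch of that polling pattern, reconstructed from the traced line numbers only (the failure return value is an assumption; this excerpt never reaches it):

    waitforcondition() {                 # sketch; mirrors autotest_common.sh@912-918 as traced
        local cond=$1                    # e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
        local max=10                     # retry budget visible in the trace (local max=10)
        while (( max-- )); do
            if eval "$cond"; then        # the trace shows: eval '[[' ... ']]'
                return 0
            fi
            sleep 1                      # autotest_common.sh@918 -- sleep 1
        done
        return 1                         # assumed failure path, not visible in this excerpt
    }

A call like the one traced at discovery.sh@122 would then read: waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'.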
00:22:04.075 [2024-07-15 23:25:19.159832] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:04.075 [2024-07-15 23:25:19.160040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.075 [2024-07-15 23:25:19.160071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eed3d0 with addr=10.0.0.2, port=4420 00:22:04.075 [2024-07-15 23:25:19.160089] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eed3d0 is same with the state(5) to be set 00:22:04.075 [2024-07-15 23:25:19.160114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eed3d0 (9): Bad file descriptor 00:22:04.075 [2024-07-15 23:25:19.160137] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:04.075 [2024-07-15 23:25:19.160152] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:04.075 [2024-07-15 23:25:19.160167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:04.075 [2024-07-15 23:25:19.160188] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.075 [2024-07-15 23:25:19.169901] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:04.075 [2024-07-15 23:25:19.170099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.075 [2024-07-15 23:25:19.170152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eed3d0 with addr=10.0.0.2, port=4420 00:22:04.075 [2024-07-15 23:25:19.170173] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eed3d0 is same with the state(5) to be set 00:22:04.075 [2024-07-15 23:25:19.170198] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eed3d0 (9): Bad file descriptor 00:22:04.075 [2024-07-15 23:25:19.170222] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:04.075 [2024-07-15 23:25:19.170238] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:04.075 [2024-07-15 23:25:19.170253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:04.075 [2024-07-15 23:25:19.170275] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
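[annotation] The notification checks in this run (discovery.sh@74-75, notification_count and notify_id going from 2 to 4) appear to come from a small helper that counts new events via notify_get_notifications. A rough reconstruction under that assumption; the notify_id update rule is inferred from the values in the log, not shown verbatim:

    get_notification_count() {           # sketch of discovery.sh@74-75 as traced
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))   # assumed; consistent with notify_id 2 -> 4 above
    }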
00:22:04.075 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.075 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:04.075 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:04.075 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:04.075 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:04.075 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:04.075 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:04.075 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:22:04.075 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:04.075 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:04.075 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.075 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:04.075 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:04.075 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:04.076 [2024-07-15 23:25:19.180527] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:04.076 [2024-07-15 23:25:19.180754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.076 [2024-07-15 23:25:19.180810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eed3d0 with addr=10.0.0.2, port=4420 00:22:04.076 [2024-07-15 23:25:19.180827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eed3d0 is same with the state(5) to be set 00:22:04.076 [2024-07-15 23:25:19.180849] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eed3d0 (9): Bad file descriptor 00:22:04.076 [2024-07-15 23:25:19.180869] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:04.076 [2024-07-15 23:25:19.180882] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:04.076 [2024-07-15 23:25:19.180895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:04.076 [2024-07-15 23:25:19.180913] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
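[annotation] The repeated connect() errno = 111 failures against 10.0.0.2:4420 above follow the target-side listener removal traced at discovery.sh@127. Expressed directly against scripts/rpc.py (rpc_cmd in the trace is the harness wrapper around it; using the target's default RPC socket here is an assumption):

    ./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # afterwards only the 4421 path should remain on the host side:
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid'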
00:22:04.076 [2024-07-15 23:25:19.190610] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:04.076 [2024-07-15 23:25:19.190852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.076 [2024-07-15 23:25:19.190882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eed3d0 with addr=10.0.0.2, port=4420 00:22:04.076 [2024-07-15 23:25:19.190899] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eed3d0 is same with the state(5) to be set 00:22:04.076 [2024-07-15 23:25:19.190922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eed3d0 (9): Bad file descriptor 00:22:04.076 [2024-07-15 23:25:19.190944] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:04.076 [2024-07-15 23:25:19.190958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:04.076 [2024-07-15 23:25:19.190972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:04.076 [2024-07-15 23:25:19.190991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.076 [2024-07-15 23:25:19.200702] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:04.076 [2024-07-15 23:25:19.200902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.076 [2024-07-15 23:25:19.200931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eed3d0 with addr=10.0.0.2, port=4420 00:22:04.076 [2024-07-15 23:25:19.200948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eed3d0 is same with the state(5) to be set 00:22:04.076 [2024-07-15 23:25:19.200976] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eed3d0 (9): Bad file descriptor 00:22:04.076 [2024-07-15 23:25:19.200998] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:04.076 [2024-07-15 23:25:19.201012] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:04.076 [2024-07-15 23:25:19.201043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:04.076 [2024-07-15 23:25:19.201064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
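[annotation] The two query helpers the conditions rely on are fully visible in the traced pipelines and can be reconstructed as below; rpc_cmd is the harness wrapper for scripts/rpc.py and /tmp/host.sock is the host-side bdevperf/app RPC socket used throughout this test:

    get_bdev_list() {            # discovery.sh@55 as traced: bdev names, sorted, joined on one line
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    get_subsystem_paths() {      # discovery.sh@63 as traced: trsvcids of controller $1 (nvme0)
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }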
00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.076 [2024-07-15 23:25:19.209634] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:04.076 [2024-07-15 23:25:19.209666] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:04.076 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.356 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:22:04.356 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:04.356 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:22:04.356 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:22:04.356 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:04.356 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:04.356 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:22:04.356 
23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:22:04.356 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:04.356 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:04.356 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.356 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:04.356 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:04.356 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:04.356 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.356 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:22:04.356 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:04.356 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:22:04.356 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:22:04.356 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:04.356 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:04.356 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:04.356 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:04.356 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:04.356 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:04.357 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:04.357 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:04.357 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.357 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:04.357 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.357 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:22:04.357 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:22:04.357 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:04.357 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:04.357 23:25:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:04.357 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.357 23:25:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:05.290 [2024-07-15 23:25:20.516969] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:05.290 [2024-07-15 23:25:20.517008] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:05.290 [2024-07-15 23:25:20.517052] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:05.290 [2024-07-15 23:25:20.603308] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:22:05.548 [2024-07-15 23:25:20.711135] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:05.548 [2024-07-15 23:25:20.711187] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:05.548 23:25:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.548 23:25:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:05.548 23:25:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:22:05.548 23:25:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:05.548 23:25:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:05.548 23:25:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:05.548 23:25:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:05.548 23:25:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:05.548 23:25:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:05.548 23:25:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.548 23:25:20 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:22:05.548 request: 00:22:05.548 { 00:22:05.548 "name": "nvme", 00:22:05.548 "trtype": "tcp", 00:22:05.548 "traddr": "10.0.0.2", 00:22:05.548 "adrfam": "ipv4", 00:22:05.548 "trsvcid": "8009", 00:22:05.548 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:05.548 "wait_for_attach": true, 00:22:05.548 "method": "bdev_nvme_start_discovery", 00:22:05.548 "req_id": 1 00:22:05.548 } 00:22:05.548 Got JSON-RPC error response 00:22:05.548 response: 00:22:05.548 { 00:22:05.548 "code": -17, 00:22:05.548 "message": "File exists" 00:22:05.548 } 00:22:05.548 23:25:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:05.548 23:25:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:22:05.548 23:25:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:05.548 23:25:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:05.548 23:25:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:05.548 23:25:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:22:05.548 23:25:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:05.548 23:25:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:05.548 23:25:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.548 23:25:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:05.548 23:25:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:05.548 23:25:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:05.548 23:25:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.548 23:25:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:22:05.548 23:25:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:22:05.548 23:25:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:05.548 23:25:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:05.548 23:25:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.548 23:25:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:05.548 23:25:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:05.548 23:25:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:05.548 23:25:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.548 23:25:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:05.548 23:25:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:05.548 23:25:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:22:05.548 23:25:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:05.548 23:25:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:22:05.548 23:25:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:05.548 23:25:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:05.548 23:25:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:05.548 23:25:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:05.548 23:25:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.548 23:25:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:05.548 request: 00:22:05.548 { 00:22:05.548 "name": "nvme_second", 00:22:05.548 "trtype": "tcp", 00:22:05.548 "traddr": "10.0.0.2", 00:22:05.548 "adrfam": "ipv4", 00:22:05.548 "trsvcid": "8009", 00:22:05.548 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:05.548 "wait_for_attach": true, 00:22:05.548 "method": "bdev_nvme_start_discovery", 00:22:05.548 "req_id": 1 00:22:05.548 } 00:22:05.549 Got JSON-RPC error response 00:22:05.549 response: 00:22:05.549 { 00:22:05.549 "code": -17, 00:22:05.549 "message": "File exists" 00:22:05.549 } 00:22:05.549 23:25:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:05.549 23:25:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:22:05.549 23:25:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:05.549 23:25:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:05.549 23:25:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:05.549 23:25:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:22:05.549 23:25:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:05.549 23:25:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:05.549 23:25:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.549 23:25:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:05.549 23:25:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:05.549 23:25:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:05.549 23:25:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.807 23:25:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:22:05.807 23:25:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:22:05.807 23:25:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:05.807 23:25:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:05.807 23:25:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.807 23:25:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:05.807 23:25:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:05.807 23:25:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:05.807 23:25:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.807 23:25:20 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:05.807 23:25:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:05.807 23:25:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:22:05.807 23:25:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:05.807 23:25:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:05.807 23:25:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:05.807 23:25:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:05.807 23:25:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:05.807 23:25:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:05.807 23:25:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.807 23:25:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:06.740 [2024-07-15 23:25:21.922783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.740 [2024-07-15 23:25:21.922842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eeb120 with addr=10.0.0.2, port=8010 00:22:06.740 [2024-07-15 23:25:21.922873] nvme_tcp.c:2712:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:06.740 [2024-07-15 23:25:21.922887] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:06.740 [2024-07-15 23:25:21.922898] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:07.675 [2024-07-15 23:25:22.925211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.675 [2024-07-15 23:25:22.925295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eeb120 with addr=10.0.0.2, port=8010 00:22:07.675 [2024-07-15 23:25:22.925333] nvme_tcp.c:2712:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:07.675 [2024-07-15 23:25:22.925358] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:07.675 [2024-07-15 23:25:22.925373] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:09.045 [2024-07-15 23:25:23.927255] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:22:09.045 request: 00:22:09.045 { 00:22:09.045 "name": "nvme_second", 00:22:09.045 "trtype": "tcp", 00:22:09.045 "traddr": "10.0.0.2", 00:22:09.045 "adrfam": "ipv4", 00:22:09.045 "trsvcid": "8010", 00:22:09.045 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:09.045 "wait_for_attach": false, 00:22:09.045 "attach_timeout_ms": 3000, 00:22:09.045 "method": "bdev_nvme_start_discovery", 00:22:09.045 "req_id": 1 00:22:09.045 } 00:22:09.045 Got JSON-RPC error response 00:22:09.045 response: 00:22:09.045 { 00:22:09.045 "code": -110, 
00:22:09.045 "message": "Connection timed out" 00:22:09.045 } 00:22:09.045 23:25:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:09.045 23:25:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:22:09.045 23:25:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:09.045 23:25:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:09.045 23:25:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:09.045 23:25:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:22:09.045 23:25:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:09.045 23:25:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:09.046 23:25:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.046 23:25:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:09.046 23:25:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:09.046 23:25:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:09.046 23:25:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.046 23:25:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:22:09.046 23:25:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:22:09.046 23:25:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2408534 00:22:09.046 23:25:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:22:09.046 23:25:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:09.046 23:25:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:22:09.046 23:25:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:09.046 23:25:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:22:09.046 23:25:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:09.046 23:25:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:09.046 rmmod nvme_tcp 00:22:09.046 rmmod nvme_fabrics 00:22:09.046 rmmod nvme_keyring 00:22:09.046 23:25:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:09.046 23:25:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:22:09.046 23:25:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:22:09.046 23:25:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 2408505 ']' 00:22:09.046 23:25:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 2408505 00:22:09.046 23:25:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 2408505 ']' 00:22:09.046 23:25:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 2408505 00:22:09.046 23:25:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:22:09.046 23:25:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:09.046 23:25:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2408505 00:22:09.046 23:25:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 
00:22:09.046 23:25:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:09.046 23:25:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2408505' 00:22:09.046 killing process with pid 2408505 00:22:09.046 23:25:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 2408505 00:22:09.046 23:25:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 2408505 00:22:09.046 23:25:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:09.046 23:25:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:09.046 23:25:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:09.046 23:25:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:09.046 23:25:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:09.046 23:25:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.046 23:25:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:09.046 23:25:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:11.575 00:22:11.575 real 0m13.761s 00:22:11.575 user 0m20.642s 00:22:11.575 sys 0m2.790s 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:11.575 ************************************ 00:22:11.575 END TEST nvmf_host_discovery 00:22:11.575 ************************************ 00:22:11.575 23:25:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:11.575 23:25:26 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:11.575 23:25:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:11.575 23:25:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:11.575 23:25:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:11.575 ************************************ 00:22:11.575 START TEST nvmf_host_multipath_status 00:22:11.575 ************************************ 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:11.575 * Looking for test storage... 
00:22:11.575 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:11.575 23:25:26 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:11.575 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:22:11.576 23:25:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:13.474 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:13.474 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:22:13.474 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:13.474 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:13.474 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:13.474 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:13.474 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:13.474 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:22:13.474 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:13.474 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:22:13.474 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:22:13.474 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:22:13.474 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:22:13.474 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:22:13.474 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:22:13.474 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:13.475 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:22:13.475 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
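For reference, the trace above is nvmf/common.sh discovering the E810 ports by PCI device ID and (just below) mapping each PCI function to its kernel net device. A minimal sketch of that lookup, assuming a standard Linux sysfs layout; the 0x8086/0x159b IDs and the cvl_0_* names are taken from this run, while the loop itself is illustrative rather than the exact common.sh implementation:

#!/usr/bin/env bash
# Enumerate Intel E810 PCI functions and the net devices behind them,
# mirroring the "Found 0000:84:00.0 (0x8086 - 0x159b)" lines in the trace.
intel=0x8086
e810_ids=(0x1592 0x159b)            # device IDs checked by common.sh above

for dev in /sys/bus/pci/devices/*; do
    vendor=$(<"$dev/vendor")
    device=$(<"$dev/device")
    [[ $vendor == "$intel" ]] || continue
    for id in "${e810_ids[@]}"; do
        [[ $device == "$id" ]] || continue
        echo "Found ${dev##*/} ($vendor - $device)"
        # Net interfaces enumerated under the PCI function, as in the
        # "Found net devices under 0000:84:00.0: cvl_0_0" lines below.
        for net in "$dev"/net/*; do
            [[ -e $net ]] && echo "  net device under ${dev##*/}: ${net##*/}"
        done
    done
done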
00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:13.475 Found net devices under 0000:84:00.0: cvl_0_0 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:13.475 Found net devices under 0000:84:00.1: cvl_0_1 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:13.475 23:25:28 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:13.475 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:13.475 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:22:13.475 00:22:13.475 --- 10.0.0.2 ping statistics --- 00:22:13.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.475 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:13.475 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:13.475 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:22:13.475 00:22:13.475 --- 10.0.0.1 ping statistics --- 00:22:13.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.475 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2411707 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2411707 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2411707 ']' 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:13.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:13.475 23:25:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:13.475 [2024-07-15 23:25:28.702704] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
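At this point the harness has built the point-to-point TCP topology (one E810 port moved into a network namespace as the target side, its peer left in the root namespace as the initiator) and is launching nvmf_tgt inside that namespace. A condensed sketch of the same sequence, using the interface names, addresses and RPC calls that appear verbatim in this trace; the shortened ./build and ./scripts paths are for readability only:

NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                    # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Target runs inside the namespace; rpc.py reaches it over /var/tmp/spdk.sock.
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The multipath checks that follow in the trace then flip the ANA state of each listener and read the resulting path view back from bdevperf, e.g.:

./scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'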
00:22:13.475 [2024-07-15 23:25:28.702794] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:13.475 EAL: No free 2048 kB hugepages reported on node 1 00:22:13.475 [2024-07-15 23:25:28.772673] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:13.733 [2024-07-15 23:25:28.884583] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:13.733 [2024-07-15 23:25:28.884642] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:13.733 [2024-07-15 23:25:28.884655] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:13.733 [2024-07-15 23:25:28.884666] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:13.733 [2024-07-15 23:25:28.884675] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:13.733 [2024-07-15 23:25:28.884822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:13.733 [2024-07-15 23:25:28.884829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:14.665 23:25:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:14.665 23:25:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:22:14.665 23:25:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:14.665 23:25:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:14.665 23:25:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:14.665 23:25:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:14.665 23:25:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2411707 00:22:14.666 23:25:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:14.666 [2024-07-15 23:25:29.892901] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:14.666 23:25:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:14.923 Malloc0 00:22:14.923 23:25:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:15.181 23:25:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:15.439 23:25:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:15.696 [2024-07-15 23:25:30.904073] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:15.696 23:25:30 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:15.954 [2024-07-15 23:25:31.148753] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:15.954 23:25:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2412001 00:22:15.954 23:25:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:15.954 23:25:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2412001 /var/tmp/bdevperf.sock 00:22:15.954 23:25:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:15.954 23:25:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2412001 ']' 00:22:15.954 23:25:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:15.954 23:25:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:15.954 23:25:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:15.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:15.954 23:25:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:15.954 23:25:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:16.211 23:25:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:16.211 23:25:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:22:16.211 23:25:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:16.470 23:25:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:22:17.035 Nvme0n1 00:22:17.035 23:25:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:17.292 Nvme0n1 00:22:17.292 23:25:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:22:17.292 23:25:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:19.819 23:25:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:22:19.819 23:25:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:19.819 23:25:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:20.077 23:25:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:22:21.012 23:25:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:22:21.012 23:25:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:21.012 23:25:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.012 23:25:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:21.270 23:25:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:21.270 23:25:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:21.270 23:25:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.270 23:25:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:21.527 23:25:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:21.527 23:25:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:21.528 23:25:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.528 23:25:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:21.786 23:25:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:21.786 23:25:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:21.786 23:25:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.786 23:25:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:22.043 23:25:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:22.043 23:25:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:22.043 23:25:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:22.043 23:25:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:22.302 23:25:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:22.302 23:25:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:22.302 23:25:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:22.302 23:25:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:22.867 23:25:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:22.868 23:25:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:22:22.868 23:25:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:22.868 23:25:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:23.433 23:25:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:22:24.366 23:25:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:22:24.366 23:25:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:24.366 23:25:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.366 23:25:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:24.623 23:25:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:24.623 23:25:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:24.623 23:25:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.623 23:25:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:24.880 23:25:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:24.880 23:25:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:24.880 23:25:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.880 23:25:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:25.137 23:25:40 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:25.137 23:25:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:25.137 23:25:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:25.137 23:25:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:25.395 23:25:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:25.395 23:25:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:25.395 23:25:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:25.395 23:25:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:25.653 23:25:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:25.653 23:25:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:25.653 23:25:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:25.653 23:25:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:25.911 23:25:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:25.911 23:25:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:22:25.911 23:25:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:26.169 23:25:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:26.427 23:25:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:22:27.361 23:25:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:22:27.361 23:25:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:27.361 23:25:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:27.361 23:25:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:27.926 23:25:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:27.926 23:25:42 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:27.926 23:25:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:27.926 23:25:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:27.926 23:25:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:27.926 23:25:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:27.926 23:25:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:27.926 23:25:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:28.200 23:25:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:28.200 23:25:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:28.200 23:25:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.200 23:25:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:28.495 23:25:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:28.495 23:25:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:28.495 23:25:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.495 23:25:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:29.061 23:25:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:29.061 23:25:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:29.061 23:25:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:29.061 23:25:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:29.061 23:25:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:29.061 23:25:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:22:29.061 23:25:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:29.319 23:25:44 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:29.884 23:25:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:22:30.817 23:25:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:22:30.817 23:25:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:30.817 23:25:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:30.817 23:25:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:31.075 23:25:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.075 23:25:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:31.075 23:25:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.075 23:25:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:31.333 23:25:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:31.333 23:25:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:31.333 23:25:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.333 23:25:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:31.590 23:25:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.590 23:25:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:31.590 23:25:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.590 23:25:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:31.848 23:25:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.848 23:25:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:31.848 23:25:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.848 23:25:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:32.105 23:25:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:22:32.105 23:25:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:32.105 23:25:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:32.105 23:25:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:32.363 23:25:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:32.363 23:25:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:22:32.363 23:25:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:32.621 23:25:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:32.878 23:25:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:22:34.247 23:25:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:22:34.248 23:25:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:34.248 23:25:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.248 23:25:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:34.248 23:25:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:34.248 23:25:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:34.248 23:25:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.248 23:25:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:34.504 23:25:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:34.504 23:25:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:34.504 23:25:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.504 23:25:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:34.761 23:25:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:34.761 23:25:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:22:34.761 23:25:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.761 23:25:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:35.018 23:25:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:35.018 23:25:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:35.018 23:25:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:35.018 23:25:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:35.275 23:25:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:35.275 23:25:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:35.275 23:25:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:35.275 23:25:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:35.533 23:25:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:35.533 23:25:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:22:35.533 23:25:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:35.790 23:25:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:36.048 23:25:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:22:36.976 23:25:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:22:36.976 23:25:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:36.976 23:25:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:36.976 23:25:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:37.232 23:25:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:37.232 23:25:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:37.232 23:25:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:37.232 23:25:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:37.488 23:25:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:37.488 23:25:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:37.489 23:25:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:37.489 23:25:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:37.744 23:25:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:37.744 23:25:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:37.744 23:25:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:37.744 23:25:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:38.307 23:25:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:38.307 23:25:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:38.307 23:25:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:38.307 23:25:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:38.307 23:25:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:38.307 23:25:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:38.307 23:25:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:38.307 23:25:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:38.872 23:25:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:38.872 23:25:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:22:38.872 23:25:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:22:38.872 23:25:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:22:39.128 23:25:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:39.690 23:25:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:22:40.622 23:25:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:22:40.622 23:25:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:40.622 23:25:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:40.622 23:25:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:40.880 23:25:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:40.880 23:25:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:40.880 23:25:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:40.880 23:25:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:41.138 23:25:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:41.138 23:25:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:41.138 23:25:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:41.138 23:25:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:41.396 23:25:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:41.396 23:25:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:41.396 23:25:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:41.396 23:25:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:41.653 23:25:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:41.653 23:25:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:41.653 23:25:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:41.653 23:25:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:41.911 23:25:57 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:41.911 23:25:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:41.911 23:25:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:41.911 23:25:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:42.168 23:25:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:42.168 23:25:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:22:42.168 23:25:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:42.426 23:25:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:42.989 23:25:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:22:43.919 23:25:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:22:43.919 23:25:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:43.920 23:25:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:43.920 23:25:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:44.177 23:25:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:44.177 23:25:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:44.178 23:25:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:44.178 23:25:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:44.434 23:25:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:44.434 23:25:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:44.434 23:25:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:44.434 23:25:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:44.691 23:25:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:44.691 23:25:59 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:44.691 23:25:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:44.691 23:25:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:44.949 23:26:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:44.949 23:26:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:44.949 23:26:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:44.949 23:26:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:45.207 23:26:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:45.207 23:26:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:45.207 23:26:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:45.207 23:26:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:45.464 23:26:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:45.464 23:26:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:22:45.464 23:26:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:45.721 23:26:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:45.977 23:26:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:22:47.348 23:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:22:47.348 23:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:47.348 23:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:47.348 23:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:47.348 23:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:47.348 23:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:47.348 23:26:02 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:47.348 23:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:47.605 23:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:47.605 23:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:47.605 23:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:47.605 23:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:47.864 23:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:47.864 23:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:47.864 23:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:47.864 23:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:48.120 23:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:48.120 23:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:48.120 23:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:48.120 23:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:48.682 23:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:48.682 23:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:48.682 23:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:48.682 23:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:48.952 23:26:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:48.952 23:26:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:22:48.952 23:26:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:49.288 23:26:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:49.545 23:26:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:22:50.476 23:26:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:22:50.476 23:26:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:50.476 23:26:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:50.476 23:26:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:50.733 23:26:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:50.733 23:26:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:50.733 23:26:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:50.733 23:26:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:50.990 23:26:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:50.990 23:26:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:50.990 23:26:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:50.990 23:26:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:51.247 23:26:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:51.247 23:26:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:51.247 23:26:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:51.247 23:26:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:51.504 23:26:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:51.504 23:26:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:51.504 23:26:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:51.504 23:26:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:51.761 23:26:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:51.761 23:26:06 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:51.761 23:26:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:51.761 23:26:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:52.018 23:26:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:52.018 23:26:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2412001 00:22:52.018 23:26:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2412001 ']' 00:22:52.018 23:26:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2412001 00:22:52.018 23:26:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:22:52.018 23:26:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:52.018 23:26:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2412001 00:22:52.018 23:26:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:52.018 23:26:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:52.018 23:26:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2412001' 00:22:52.018 killing process with pid 2412001 00:22:52.018 23:26:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2412001 00:22:52.018 23:26:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2412001 00:22:52.286 Connection closed with partial response: 00:22:52.286 00:22:52.286 00:22:52.286 23:26:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2412001 00:22:52.286 23:26:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:52.286 [2024-07-15 23:25:31.211298] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:22:52.286 [2024-07-15 23:25:31.211397] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2412001 ] 00:22:52.286 EAL: No free 2048 kB hugepages reported on node 1 00:22:52.286 [2024-07-15 23:25:31.272329] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.286 [2024-07-15 23:25:31.382414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:52.286 Running I/O for 90 seconds... 
00:22:52.286 [2024-07-15 23:25:47.905621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:36536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.286 [2024-07-15 23:25:47.905686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.286 [2024-07-15 23:25:47.905775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:36648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.286 [2024-07-15 23:25:47.905811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:52.286 [2024-07-15 23:25:47.905838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:36656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.286 [2024-07-15 23:25:47.905855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:52.286 [2024-07-15 23:25:47.905878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:36664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.286 [2024-07-15 23:25:47.905895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:52.286 [2024-07-15 23:25:47.905918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:36672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.286 [2024-07-15 23:25:47.905935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:52.286 [2024-07-15 23:25:47.905958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:36680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.286 [2024-07-15 23:25:47.905975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:52.286 [2024-07-15 23:25:47.905997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.286 [2024-07-15 23:25:47.906020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:52.286 [2024-07-15 23:25:47.906057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:36696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.286 [2024-07-15 23:25:47.906073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:52.286 [2024-07-15 23:25:47.906109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:36544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.286 [2024-07-15 23:25:47.906124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:52.286 [2024-07-15 23:25:47.906146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:36552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.286 [2024-07-15 23:25:47.906162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:126 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:52.286 [2024-07-15 23:25:47.906183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:36560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.286 [2024-07-15 23:25:47.906208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:52.286 [2024-07-15 23:25:47.906230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:36568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.286 [2024-07-15 23:25:47.906245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:52.286 [2024-07-15 23:25:47.906266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:36576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.286 [2024-07-15 23:25:47.906281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:52.286 [2024-07-15 23:25:47.906301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:36584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.287 [2024-07-15 23:25:47.906316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:52.287 [2024-07-15 23:25:47.906337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:36592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.287 [2024-07-15 23:25:47.906352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:52.287 [2024-07-15 23:25:47.906373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:36600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.287 [2024-07-15 23:25:47.906387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:52.287 [2024-07-15 23:25:47.906408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:36608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.287 [2024-07-15 23:25:47.906424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:52.287 [2024-07-15 23:25:47.906444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:36616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.287 [2024-07-15 23:25:47.906460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:52.287 [2024-07-15 23:25:47.906482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:36624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.287 [2024-07-15 23:25:47.906497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:52.287 [2024-07-15 23:25:47.906517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:36632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.287 [2024-07-15 23:25:47.906532] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:52.287 [2024-07-15 23:25:47.906553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:36640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.287 [2024-07-15 23:25:47.906568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:52.287 [2024-07-15 23:25:47.906588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:36704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.287 [2024-07-15 23:25:47.906604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:52.287 [2024-07-15 23:25:47.906625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:36712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.287 [2024-07-15 23:25:47.906644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:52.287 [2024-07-15 23:25:47.906666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:36720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.287 [2024-07-15 23:25:47.906681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:52.287 [2024-07-15 23:25:47.906702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:36728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.287 [2024-07-15 23:25:47.906717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:52.287 [2024-07-15 23:25:47.906745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:36736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.287 [2024-07-15 23:25:47.906778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:52.287 [2024-07-15 23:25:47.906801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:36744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.287 [2024-07-15 23:25:47.906817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:52.287 [2024-07-15 23:25:47.906838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:36752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.287 [2024-07-15 23:25:47.906853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:52.287 [2024-07-15 23:25:47.906875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:36760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.287 [2024-07-15 23:25:47.906891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:52.287 [2024-07-15 23:25:47.906912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:36768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:52.287 [2024-07-15 23:25:47.906927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:52.287 [2024-07-15 23:25:47.906949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:36776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.287 [2024-07-15 23:25:47.906964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:52.287 [2024-07-15 23:25:47.906986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.287 [2024-07-15 23:25:47.907001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:52.287 [2024-07-15 23:25:47.907022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.287 [2024-07-15 23:25:47.907038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.287 [2024-07-15 23:25:47.907075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:36800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.287 [2024-07-15 23:25:47.907091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:52.287 [2024-07-15 23:25:47.907113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:36808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.287 [2024-07-15 23:25:47.907128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:52.287 [2024-07-15 23:25:47.907153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:36816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.287 [2024-07-15 23:25:47.907169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:52.287 [2024-07-15 23:25:47.907191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:36824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.287 [2024-07-15 23:25:47.907207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:52.287 [2024-07-15 23:25:47.907444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:36832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.287 [2024-07-15 23:25:47.907466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:52.287 [2024-07-15 23:25:47.907493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:36840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.287 [2024-07-15 23:25:47.907510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:52.287 [2024-07-15 23:25:47.907534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 
lba:36848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.287 [2024-07-15 23:25:47.907550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:52.287 [2024-07-15 23:25:47.907573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:36856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.287 [2024-07-15 23:25:47.907589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:52.287 [2024-07-15 23:25:47.907613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.287 [2024-07-15 23:25:47.907629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:52.287 [2024-07-15 23:25:47.907652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.287 [2024-07-15 23:25:47.907667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:52.287 [2024-07-15 23:25:47.907690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:36880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.287 [2024-07-15 23:25:47.907705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:52.287 [2024-07-15 23:25:47.907728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:36888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.287 [2024-07-15 23:25:47.907769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:52.287 [2024-07-15 23:25:47.907796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:36896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.287 [2024-07-15 23:25:47.907813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:52.287 [2024-07-15 23:25:47.907837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.287 [2024-07-15 23:25:47.907853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:52.287 [2024-07-15 23:25:47.907888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:36912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.287 [2024-07-15 23:25:47.907909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:52.287 [2024-07-15 23:25:47.907934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:36920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.287 [2024-07-15 23:25:47.907962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:52.287 [2024-07-15 23:25:47.907986] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:36928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.287 [2024-07-15 23:25:47.908002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:52.287 [2024-07-15 23:25:47.908026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:36936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.287 [2024-07-15 23:25:47.908057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:52.287 [2024-07-15 23:25:47.908082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.287 [2024-07-15 23:25:47.908097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:52.287 [2024-07-15 23:25:47.908120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:36952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.287 [2024-07-15 23:25:47.908135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:52.287 [2024-07-15 23:25:47.908159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.287 [2024-07-15 23:25:47.908175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:52.287 [2024-07-15 23:25:47.908342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:36968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.287 [2024-07-15 23:25:47.908364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:52.287 [2024-07-15 23:25:47.908393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:36976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.287 [2024-07-15 23:25:47.908409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:52.287 [2024-07-15 23:25:47.908434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:36984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.287 [2024-07-15 23:25:47.908450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:52.287 [2024-07-15 23:25:47.908475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:36992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.287 [2024-07-15 23:25:47.908490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:52.287 [2024-07-15 23:25:47.908515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:37000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.287 [2024-07-15 23:25:47.908530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003b p:0 m:0 dnr:0 
00:22:52.287 [2024-07-15 23:25:47.908563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:37008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.287 [2024-07-15 23:25:47.908583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:52.287 [2024-07-15 23:25:47.908610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:37016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.287 [2024-07-15 23:25:47.908625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:52.288 [2024-07-15 23:25:47.908651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:37024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.288 [2024-07-15 23:25:47.908666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:52.288 [2024-07-15 23:25:47.908691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:37032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.288 [2024-07-15 23:25:47.908707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:52.288 [2024-07-15 23:25:47.908732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:37040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.288 [2024-07-15 23:25:47.908772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:52.288 [2024-07-15 23:25:47.908800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.288 [2024-07-15 23:25:47.908817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.288 [2024-07-15 23:25:47.908843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:37056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.288 [2024-07-15 23:25:47.908859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:52.288 [2024-07-15 23:25:47.908885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:37064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.288 [2024-07-15 23:25:47.908902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:52.288 [2024-07-15 23:25:47.908928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:37072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.288 [2024-07-15 23:25:47.908944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:52.288 [2024-07-15 23:25:47.908970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:37080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.288 [2024-07-15 23:25:47.908987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:89 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:52.288 [2024-07-15 23:25:47.909012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:37088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.288 [2024-07-15 23:25:47.909029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:52.288 [2024-07-15 23:25:47.909070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:37096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.288 [2024-07-15 23:25:47.909086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:52.288 [2024-07-15 23:25:47.909111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:37104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.288 [2024-07-15 23:25:47.909127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:52.288 [2024-07-15 23:25:47.909156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:37112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.288 [2024-07-15 23:25:47.909173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:52.288 [2024-07-15 23:25:47.909198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:37120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.288 [2024-07-15 23:25:47.909214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:52.288 [2024-07-15 23:25:47.909239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:37128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.288 [2024-07-15 23:25:47.909254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:52.288 [2024-07-15 23:25:47.909279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.288 [2024-07-15 23:25:47.909295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:52.288 [2024-07-15 23:25:47.909320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:37144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.288 [2024-07-15 23:25:47.909335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:52.288 [2024-07-15 23:25:47.909360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:37152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.288 [2024-07-15 23:25:47.909376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:52.288 [2024-07-15 23:25:47.909402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:37160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.288 [2024-07-15 23:25:47.909418] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:52.288 [2024-07-15 23:25:47.909442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:37168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.288 [2024-07-15 23:25:47.909458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:52.288 [2024-07-15 23:25:47.909484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:37176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.288 [2024-07-15 23:25:47.909499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:52.288 [2024-07-15 23:25:47.909525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:37184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.288 [2024-07-15 23:25:47.909542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:52.288 [2024-07-15 23:25:47.909567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:37192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.288 [2024-07-15 23:25:47.909582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:52.288 [2024-07-15 23:25:47.909607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:37200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.288 [2024-07-15 23:25:47.909623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:52.288 [2024-07-15 23:25:47.909652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:37208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.288 [2024-07-15 23:25:47.909669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:52.288 [2024-07-15 23:25:47.909694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:37216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.288 [2024-07-15 23:25:47.909710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:52.288 [2024-07-15 23:25:47.909760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:37224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.288 [2024-07-15 23:25:47.909795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:52.288 [2024-07-15 23:25:47.909823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:37232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.288 [2024-07-15 23:25:47.909840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:52.288 [2024-07-15 23:25:47.909866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:37240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.288 
[2024-07-15 23:25:47.909883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:52.288 [2024-07-15 23:25:47.909910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:37248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.288 [2024-07-15 23:25:47.909927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:52.288 [2024-07-15 23:25:47.909953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.288 [2024-07-15 23:25:47.909970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:52.288 [2024-07-15 23:25:47.909997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:37264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.288 [2024-07-15 23:25:47.910015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:52.288 [2024-07-15 23:25:47.910041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:37272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.288 [2024-07-15 23:25:47.910057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:52.288 [2024-07-15 23:25:47.910098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.288 [2024-07-15 23:25:47.910116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:52.288 [2024-07-15 23:25:47.910143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:37288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.288 [2024-07-15 23:25:47.910159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:52.288 [2024-07-15 23:25:47.910185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:37296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.288 [2024-07-15 23:25:47.910201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:52.288 [2024-07-15 23:25:47.910226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:37304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.288 [2024-07-15 23:25:47.910247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.288 [2024-07-15 23:25:47.910274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:37312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.288 [2024-07-15 23:25:47.910290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:52.288 [2024-07-15 23:25:47.910331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:37320 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.288 [2024-07-15 23:25:47.910349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:52.288 [2024-07-15 23:25:47.910374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:37328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.288 [2024-07-15 23:25:47.910390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:52.288 [2024-07-15 23:25:47.910416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:37336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.288 [2024-07-15 23:25:47.910432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:52.288 [2024-07-15 23:25:47.910577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:37344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.288 [2024-07-15 23:25:47.910598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:52.288 [2024-07-15 23:25:47.910630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:37352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.288 [2024-07-15 23:25:47.910647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:52.288 [2024-07-15 23:25:47.910677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:37360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.288 [2024-07-15 23:25:47.910693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:52.288 [2024-07-15 23:25:47.910735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:37368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.288 [2024-07-15 23:25:47.910763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:52.288 [2024-07-15 23:25:47.910811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:37376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.288 [2024-07-15 23:25:47.910829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:52.288 [2024-07-15 23:25:47.910859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:37384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.289 [2024-07-15 23:25:47.910877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:52.289 [2024-07-15 23:25:47.910907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.289 [2024-07-15 23:25:47.910924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:52.289 [2024-07-15 23:25:47.910955] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:37400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.289 [2024-07-15 23:25:47.910980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:52.289 [2024-07-15 23:25:47.911012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.289 [2024-07-15 23:25:47.911044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:52.289 [2024-07-15 23:25:47.911075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:37416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.289 [2024-07-15 23:25:47.911106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:52.289 [2024-07-15 23:25:47.911135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:37424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.289 [2024-07-15 23:25:47.911151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:52.289 [2024-07-15 23:25:47.911179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:37432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.289 [2024-07-15 23:25:47.911196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:52.289 [2024-07-15 23:25:47.911224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:37440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.289 [2024-07-15 23:25:47.911240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:52.289 [2024-07-15 23:25:47.911280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:37448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.289 [2024-07-15 23:25:47.911297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:52.289 [2024-07-15 23:25:47.911326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:37456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.289 [2024-07-15 23:25:47.911342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:52.289 [2024-07-15 23:25:47.911371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:37464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.289 [2024-07-15 23:25:47.911388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:52.289 [2024-07-15 23:25:47.911416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:37472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.289 [2024-07-15 23:25:47.911431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:52.289 [2024-07-15 
23:25:47.911460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:37480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:52.289 [2024-07-15 23:25:47.911476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
[... repeated nvme_qpair.c 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* pairs: WRITE and READ commands on sqid:1 (various cid and lba) each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0, logged 2024-07-15 23:25:47.911 through 23:26:04.593, console time 00:22:52.289-00:22:52.293 ...]
00:22:52.293 [2024-07-15 23:26:04.593101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:13864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:52.293 [2024-07-15 23:26:04.593117] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.293 [2024-07-15 23:26:04.593153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.293 [2024-07-15 23:26:04.593169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.293 [2024-07-15 23:26:04.593191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.293 [2024-07-15 23:26:04.593206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:52.293 [2024-07-15 23:26:04.593227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.293 [2024-07-15 23:26:04.593243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:52.293 [2024-07-15 23:26:04.593264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.293 [2024-07-15 23:26:04.593279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:52.294 [2024-07-15 23:26:04.593304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:13944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.294 [2024-07-15 23:26:04.593321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:52.294 [2024-07-15 23:26:04.593342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:13960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.294 [2024-07-15 23:26:04.593357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:52.294 [2024-07-15 23:26:04.593377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.294 [2024-07-15 23:26:04.593393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:52.294 [2024-07-15 23:26:04.593415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.294 [2024-07-15 23:26:04.593430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:52.294 [2024-07-15 23:26:04.593451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.294 [2024-07-15 23:26:04.593466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:52.294 [2024-07-15 23:26:04.593487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:52.294 [2024-07-15 23:26:04.593502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:52.294 [2024-07-15 23:26:04.593523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.294 [2024-07-15 23:26:04.593538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:52.294 [2024-07-15 23:26:04.593559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:13256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.294 [2024-07-15 23:26:04.593574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:52.294 [2024-07-15 23:26:04.593597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:13384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.294 [2024-07-15 23:26:04.593613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:52.294 [2024-07-15 23:26:04.593634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.294 [2024-07-15 23:26:04.593650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:52.294 [2024-07-15 23:26:04.593671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.294 [2024-07-15 23:26:04.593686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:52.294 [2024-07-15 23:26:04.593708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.294 [2024-07-15 23:26:04.593747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:52.294 [2024-07-15 23:26:04.595107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:13304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.294 [2024-07-15 23:26:04.595131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:52.294 [2024-07-15 23:26:04.595158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.294 [2024-07-15 23:26:04.595176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:52.294 [2024-07-15 23:26:04.595198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:13344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.294 [2024-07-15 23:26:04.595214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:52.294 [2024-07-15 23:26:04.595236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 
lba:13456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.294 [2024-07-15 23:26:04.595253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:52.294 [2024-07-15 23:26:04.595275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:13584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.294 [2024-07-15 23:26:04.595291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:52.294 [2024-07-15 23:26:04.595312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:13528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.294 [2024-07-15 23:26:04.595328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:52.294 [2024-07-15 23:26:04.595349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:13512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.294 [2024-07-15 23:26:04.595365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:52.294 [2024-07-15 23:26:04.595386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.294 [2024-07-15 23:26:04.595402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:52.294 [2024-07-15 23:26:04.595423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.294 [2024-07-15 23:26:04.595439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:52.294 [2024-07-15 23:26:04.595460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.294 [2024-07-15 23:26:04.595476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:52.294 [2024-07-15 23:26:04.595497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.294 [2024-07-15 23:26:04.595513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:52.294 [2024-07-15 23:26:04.595534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:13016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.294 [2024-07-15 23:26:04.595550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:52.294 [2024-07-15 23:26:04.595587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.294 [2024-07-15 23:26:04.595607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:52.294 [2024-07-15 23:26:04.595629] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.294 [2024-07-15 23:26:04.595645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:52.294 [2024-07-15 23:26:04.595667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.294 [2024-07-15 23:26:04.595682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:52.294 [2024-07-15 23:26:04.595702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:13776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.294 [2024-07-15 23:26:04.595733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:52.294 [2024-07-15 23:26:04.595766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:13608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.294 [2024-07-15 23:26:04.595784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.294 [2024-07-15 23:26:04.595815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.294 [2024-07-15 23:26:04.595832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:52.294 [2024-07-15 23:26:04.595854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.294 [2024-07-15 23:26:04.595872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:52.294 [2024-07-15 23:26:04.595895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:13200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.294 [2024-07-15 23:26:04.595912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:52.294 [2024-07-15 23:26:04.595934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.294 [2024-07-15 23:26:04.607894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:52.294 [2024-07-15 23:26:04.607941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.294 [2024-07-15 23:26:04.607962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:52.294 [2024-07-15 23:26:04.607985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.294 [2024-07-15 23:26:04.608002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 
00:22:52.294 [2024-07-15 23:26:04.608040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.294 [2024-07-15 23:26:04.608056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:52.294 [2024-07-15 23:26:04.608077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.294 [2024-07-15 23:26:04.608098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:52.294 [2024-07-15 23:26:04.608120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.294 [2024-07-15 23:26:04.608136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:52.294 [2024-07-15 23:26:04.608157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.294 [2024-07-15 23:26:04.608172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:52.294 [2024-07-15 23:26:04.608193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.294 [2024-07-15 23:26:04.608209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:52.294 [2024-07-15 23:26:04.608230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.294 [2024-07-15 23:26:04.608246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:52.294 [2024-07-15 23:26:04.608855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:14104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.295 [2024-07-15 23:26:04.608882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:52.295 [2024-07-15 23:26:04.608911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.295 [2024-07-15 23:26:04.608929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:52.295 [2024-07-15 23:26:04.608953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.295 [2024-07-15 23:26:04.608970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:52.295 [2024-07-15 23:26:04.608992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.295 [2024-07-15 23:26:04.609009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:63 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:52.295 [2024-07-15 23:26:04.609047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:13864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.295 [2024-07-15 23:26:04.609071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:52.295 [2024-07-15 23:26:04.609108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:13896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.295 [2024-07-15 23:26:04.609124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:52.295 [2024-07-15 23:26:04.609146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.295 [2024-07-15 23:26:04.609161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:52.295 [2024-07-15 23:26:04.609182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.295 [2024-07-15 23:26:04.609198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:52.295 [2024-07-15 23:26:04.609225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.295 [2024-07-15 23:26:04.609241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:52.295 [2024-07-15 23:26:04.609262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:13000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.295 [2024-07-15 23:26:04.609278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:52.295 [2024-07-15 23:26:04.609299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.295 [2024-07-15 23:26:04.609314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:52.295 [2024-07-15 23:26:04.609335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.295 [2024-07-15 23:26:04.609350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:52.295 [2024-07-15 23:26:04.609371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:13176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.295 [2024-07-15 23:26:04.609387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:52.295 [2024-07-15 23:26:04.611250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:13672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.295 [2024-07-15 23:26:04.611273] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:52.295 [2024-07-15 23:26:04.611302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:13704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.295 [2024-07-15 23:26:04.611319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:52.295 [2024-07-15 23:26:04.611342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.295 [2024-07-15 23:26:04.611358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:52.295 [2024-07-15 23:26:04.611379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:13768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.295 [2024-07-15 23:26:04.611395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:52.295 [2024-07-15 23:26:04.611416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:13560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.295 [2024-07-15 23:26:04.611432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:52.295 [2024-07-15 23:26:04.611452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.295 [2024-07-15 23:26:04.611467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:52.295 [2024-07-15 23:26:04.611492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:14136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.295 [2024-07-15 23:26:04.611508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.295 [2024-07-15 23:26:04.611533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.295 [2024-07-15 23:26:04.611550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:52.295 [2024-07-15 23:26:04.611571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.295 [2024-07-15 23:26:04.611590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:52.295 [2024-07-15 23:26:04.611611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:14184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.295 [2024-07-15 23:26:04.611627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:52.295 [2024-07-15 23:26:04.611648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:13808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:52.295 [2024-07-15 23:26:04.611664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:52.295 [2024-07-15 23:26:04.611685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.295 [2024-07-15 23:26:04.611701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:52.295 [2024-07-15 23:26:04.611744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:13872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.295 [2024-07-15 23:26:04.611764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:52.295 [2024-07-15 23:26:04.611788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:13904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.295 [2024-07-15 23:26:04.611805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:52.295 [2024-07-15 23:26:04.611828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.295 [2024-07-15 23:26:04.611845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:52.295 [2024-07-15 23:26:04.611868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.295 [2024-07-15 23:26:04.611885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:52.295 [2024-07-15 23:26:04.611908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.295 [2024-07-15 23:26:04.611924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:52.295 [2024-07-15 23:26:04.611947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.295 [2024-07-15 23:26:04.611964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:52.295 [2024-07-15 23:26:04.611988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.295 [2024-07-15 23:26:04.612004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:52.295 [2024-07-15 23:26:04.612042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.295 [2024-07-15 23:26:04.612065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:52.295 [2024-07-15 23:26:04.612102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:13728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.295 [2024-07-15 23:26:04.612119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:52.295 [2024-07-15 23:26:04.612140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.295 [2024-07-15 23:26:04.612155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:52.295 [2024-07-15 23:26:04.612176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:13272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.295 [2024-07-15 23:26:04.612191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:52.295 [2024-07-15 23:26:04.612212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:13776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.295 [2024-07-15 23:26:04.612226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:52.295 [2024-07-15 23:26:04.612247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.295 [2024-07-15 23:26:04.612262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:52.295 [2024-07-15 23:26:04.612283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:13200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.295 [2024-07-15 23:26:04.612299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:52.295 [2024-07-15 23:26:04.612319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.295 [2024-07-15 23:26:04.612334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:52.295 [2024-07-15 23:26:04.612355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:14008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.295 [2024-07-15 23:26:04.612370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:52.295 [2024-07-15 23:26:04.612391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:14040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.295 [2024-07-15 23:26:04.612406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:52.295 [2024-07-15 23:26:04.612426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:14072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.295 [2024-07-15 23:26:04.612441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:52.295 [2024-07-15 23:26:04.612462] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.295 [2024-07-15 23:26:04.612477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:52.295 [2024-07-15 23:26:04.612497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.295 [2024-07-15 23:26:04.612516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:52.295 [2024-07-15 23:26:04.612538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:14120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.295 [2024-07-15 23:26:04.612553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:52.295 [2024-07-15 23:26:04.612573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.296 [2024-07-15 23:26:04.612589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:52.296 [2024-07-15 23:26:04.612610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.296 [2024-07-15 23:26:04.612625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:52.296 [2024-07-15 23:26:04.612645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.296 [2024-07-15 23:26:04.612661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:52.296 [2024-07-15 23:26:04.612682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.296 [2024-07-15 23:26:04.612697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:52.296 [2024-07-15 23:26:04.612734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.296 [2024-07-15 23:26:04.612760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:52.296 [2024-07-15 23:26:04.612794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.296 [2024-07-15 23:26:04.612811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.296 [2024-07-15 23:26:04.612834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:13464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.296 [2024-07-15 23:26:04.612850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:22:52.296 [2024-07-15 23:26:04.612872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:13680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.296 [2024-07-15 23:26:04.612888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:52.296 [2024-07-15 23:26:04.612911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.296 [2024-07-15 23:26:04.612928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:52.296 [2024-07-15 23:26:04.612951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.296 [2024-07-15 23:26:04.612967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:52.296 [2024-07-15 23:26:04.615348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.296 [2024-07-15 23:26:04.615372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:52.296 [2024-07-15 23:26:04.615403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:14224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.296 [2024-07-15 23:26:04.615420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:52.296 [2024-07-15 23:26:04.615441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.296 [2024-07-15 23:26:04.615456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:52.296 [2024-07-15 23:26:04.615477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:14256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.296 [2024-07-15 23:26:04.615492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:52.296 [2024-07-15 23:26:04.615515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:14272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.296 [2024-07-15 23:26:04.615530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:52.296 [2024-07-15 23:26:04.615550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:14288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.296 [2024-07-15 23:26:04.615565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:52.296 [2024-07-15 23:26:04.615586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.296 [2024-07-15 23:26:04.615601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:0 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:52.296 [2024-07-15 23:26:04.615622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.296 [2024-07-15 23:26:04.615637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:52.296 [2024-07-15 23:26:04.615658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:14336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.296 [2024-07-15 23:26:04.615673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:52.296 [2024-07-15 23:26:04.615693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.296 [2024-07-15 23:26:04.615708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:52.296 [2024-07-15 23:26:04.615762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.296 [2024-07-15 23:26:04.615781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:52.296 [2024-07-15 23:26:04.615820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:14048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.296 [2024-07-15 23:26:04.615837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:52.296 [2024-07-15 23:26:04.615859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:14080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.296 [2024-07-15 23:26:04.615876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:52.296 [2024-07-15 23:26:04.615903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.296 [2024-07-15 23:26:04.615921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:52.296 [2024-07-15 23:26:04.615944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.296 [2024-07-15 23:26:04.615961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:52.296 [2024-07-15 23:26:04.615984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.296 [2024-07-15 23:26:04.616001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:52.296 [2024-07-15 23:26:04.616024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.296 [2024-07-15 23:26:04.616055] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:52.296 [2024-07-15 23:26:04.616077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.296 [2024-07-15 23:26:04.616093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:52.296 [2024-07-15 23:26:04.616128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.296 [2024-07-15 23:26:04.616144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:52.296 [2024-07-15 23:26:04.616164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:14376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.296 [2024-07-15 23:26:04.616179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:52.296 [2024-07-15 23:26:04.616199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.296 [2024-07-15 23:26:04.616215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:52.296 [2024-07-15 23:26:04.616235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.296 [2024-07-15 23:26:04.616250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:52.296 [2024-07-15 23:26:04.616271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.296 [2024-07-15 23:26:04.616298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:52.296 [2024-07-15 23:26:04.616319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:14152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.296 [2024-07-15 23:26:04.616335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:52.296 [2024-07-15 23:26:04.616355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.296 [2024-07-15 23:26:04.616371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:52.296 [2024-07-15 23:26:04.616391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.296 [2024-07-15 23:26:04.616410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:52.296 [2024-07-15 23:26:04.616431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:52.296 [2024-07-15 23:26:04.616447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.296 [2024-07-15 23:26:04.616468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:13968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.296 [2024-07-15 23:26:04.616483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.296 [2024-07-15 23:26:04.616503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.296 [2024-07-15 23:26:04.616518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:52.296 [2024-07-15 23:26:04.616538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.296 [2024-07-15 23:26:04.616554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:52.296 [2024-07-15 23:26:04.616574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:13016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.296 [2024-07-15 23:26:04.616590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:52.296 [2024-07-15 23:26:04.616610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.296 [2024-07-15 23:26:04.616626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:52.296 [2024-07-15 23:26:04.616646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.296 [2024-07-15 23:26:04.616661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:52.296 [2024-07-15 23:26:04.616682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:14008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.296 [2024-07-15 23:26:04.616697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:52.297 [2024-07-15 23:26:04.617444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:14072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.297 [2024-07-15 23:26:04.617466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:52.297 [2024-07-15 23:26:04.617492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.297 [2024-07-15 23:26:04.617508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:52.297 [2024-07-15 23:26:04.617529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 
lba:13832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.297 [2024-07-15 23:26:04.617544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:52.297 [2024-07-15 23:26:04.617565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.297 [2024-07-15 23:26:04.617585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:52.297 [2024-07-15 23:26:04.617615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.297 [2024-07-15 23:26:04.617630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:52.297 [2024-07-15 23:26:04.617652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.297 [2024-07-15 23:26:04.617669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:52.297 [2024-07-15 23:26:04.617689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:13744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.297 [2024-07-15 23:26:04.617705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:52.297 [2024-07-15 23:26:04.617751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.297 [2024-07-15 23:26:04.617770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:52.297 [2024-07-15 23:26:04.617805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.297 [2024-07-15 23:26:04.617822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:52.297 [2024-07-15 23:26:04.617845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:13304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.297 [2024-07-15 23:26:04.617861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:52.297 [2024-07-15 23:26:04.617883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.297 [2024-07-15 23:26:04.617899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:52.297 [2024-07-15 23:26:04.617922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.297 [2024-07-15 23:26:04.617938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:52.297 [2024-07-15 23:26:04.617961] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.297 [2024-07-15 23:26:04.617977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:52.297 [2024-07-15 23:26:04.617999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.297 [2024-07-15 23:26:04.618016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:52.297 [2024-07-15 23:26:04.618053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.297 [2024-07-15 23:26:04.618069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:52.297 [2024-07-15 23:26:04.618520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:14400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.297 [2024-07-15 23:26:04.618543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:52.297 [2024-07-15 23:26:04.618576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:14416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.297 [2024-07-15 23:26:04.618594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:52.297 [2024-07-15 23:26:04.618615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.297 [2024-07-15 23:26:04.618631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:52.297 [2024-07-15 23:26:04.618652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:14448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.297 [2024-07-15 23:26:04.618667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:52.297 [2024-07-15 23:26:04.618703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:14464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.297 [2024-07-15 23:26:04.618719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:52.297 [2024-07-15 23:26:04.618773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.297 [2024-07-15 23:26:04.618792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:52.297 [2024-07-15 23:26:04.618816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.297 [2024-07-15 23:26:04.618833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001d p:0 m:0 dnr:0 
00:22:52.297 [2024-07-15 23:26:04.618856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:14512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.297 [2024-07-15 23:26:04.618873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:52.297 [2024-07-15 23:26:04.618895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:14528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.297 [2024-07-15 23:26:04.618912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:52.297 [2024-07-15 23:26:04.618935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.297 [2024-07-15 23:26:04.618951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:52.297 [2024-07-15 23:26:04.618974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.297 [2024-07-15 23:26:04.618991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.297 [2024-07-15 23:26:04.619013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.297 [2024-07-15 23:26:04.619030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:52.297 [2024-07-15 23:26:04.619066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:14208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.297 [2024-07-15 23:26:04.619082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:52.297 [2024-07-15 23:26:04.619107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.297 [2024-07-15 23:26:04.619124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:52.297 [2024-07-15 23:26:04.619144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.297 [2024-07-15 23:26:04.619160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:52.297 [2024-07-15 23:26:04.619181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:14224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.297 [2024-07-15 23:26:04.619198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:52.297 [2024-07-15 23:26:04.619583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.297 [2024-07-15 23:26:04.619606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:52.297 [2024-07-15 23:26:04.619631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.297 [2024-07-15 23:26:04.619648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:52.297 [2024-07-15 23:26:04.619669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:14320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.297 [2024-07-15 23:26:04.619685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:52.297 [2024-07-15 23:26:04.619706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.297 [2024-07-15 23:26:04.619749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:52.297 [2024-07-15 23:26:04.619776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.297 [2024-07-15 23:26:04.619797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:52.297 [2024-07-15 23:26:04.619819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.297 [2024-07-15 23:26:04.619836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:52.297 [2024-07-15 23:26:04.619870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.297 [2024-07-15 23:26:04.619886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:52.297 [2024-07-15 23:26:04.619908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.297 [2024-07-15 23:26:04.619927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:52.297 [2024-07-15 23:26:04.619949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.297 [2024-07-15 23:26:04.619966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:52.297 [2024-07-15 23:26:04.619988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:13768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.297 [2024-07-15 23:26:04.620009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:52.297 [2024-07-15 23:26:04.620059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:14152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.297 [2024-07-15 23:26:04.620076] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:52.297 [2024-07-15 23:26:04.620114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.297 [2024-07-15 23:26:04.620130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:52.298 [2024-07-15 23:26:04.620150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.298 [2024-07-15 23:26:04.620166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:52.298 [2024-07-15 23:26:04.620187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.298 [2024-07-15 23:26:04.620202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:52.298 [2024-07-15 23:26:04.620222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.298 [2024-07-15 23:26:04.620238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:52.298 [2024-07-15 23:26:04.620258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:14008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.298 [2024-07-15 23:26:04.620274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:52.298 [2024-07-15 23:26:04.620294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:14232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.298 [2024-07-15 23:26:04.620313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:52.298 [2024-07-15 23:26:04.620334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:14264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.298 [2024-07-15 23:26:04.620350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:52.298 [2024-07-15 23:26:04.620370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.298 [2024-07-15 23:26:04.620385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:52.298 [2024-07-15 23:26:04.620406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.298 [2024-07-15 23:26:04.620421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:52.298 [2024-07-15 23:26:04.620442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:52.298 [2024-07-15 23:26:04.620457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:52.298 [2024-07-15 23:26:04.620478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.298 [2024-07-15 23:26:04.620497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:52.298 [2024-07-15 23:26:04.620519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:13464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.298 [2024-07-15 23:26:04.620535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:52.298 [2024-07-15 23:26:04.620556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.298 [2024-07-15 23:26:04.620571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:52.298 [2024-07-15 23:26:04.620600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.298 [2024-07-15 23:26:04.620615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:52.298 [2024-07-15 23:26:04.620635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:14024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.298 [2024-07-15 23:26:04.620651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:52.298 [2024-07-15 23:26:04.620671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.298 [2024-07-15 23:26:04.620687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.298 [2024-07-15 23:26:04.622543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.298 [2024-07-15 23:26:04.622567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:52.298 [2024-07-15 23:26:04.622592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:14384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.298 [2024-07-15 23:26:04.622609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:52.298 [2024-07-15 23:26:04.622631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.298 [2024-07-15 23:26:04.622646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:52.298 [2024-07-15 23:26:04.622667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 
lba:13728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.298 [2024-07-15 23:26:04.622682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:52.298 [2024-07-15 23:26:04.622702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:14416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.298 [2024-07-15 23:26:04.622717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:52.298 [2024-07-15 23:26:04.622764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.298 [2024-07-15 23:26:04.622781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:52.298 [2024-07-15 23:26:04.622804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.298 [2024-07-15 23:26:04.622820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:52.298 [2024-07-15 23:26:04.622848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.298 [2024-07-15 23:26:04.622865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:52.298 [2024-07-15 23:26:04.622887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.298 [2024-07-15 23:26:04.622903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:52.298 [2024-07-15 23:26:04.622925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.298 [2024-07-15 23:26:04.622941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:52.298 [2024-07-15 23:26:04.622963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.298 [2024-07-15 23:26:04.622980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:52.298 [2024-07-15 23:26:04.623002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:14224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.298 [2024-07-15 23:26:04.623018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:52.298 [2024-07-15 23:26:04.623055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.298 [2024-07-15 23:26:04.623071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:52.298 [2024-07-15 23:26:04.623091] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.298 [2024-07-15 23:26:04.623107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:52.298 [2024-07-15 23:26:04.623128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.298 [2024-07-15 23:26:04.623143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:52.298 [2024-07-15 23:26:04.623164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:13984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.298 [2024-07-15 23:26:04.623180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:52.298 [2024-07-15 23:26:04.623201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.298 [2024-07-15 23:26:04.623216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:52.298 [2024-07-15 23:26:04.623236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.298 [2024-07-15 23:26:04.623253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:52.298 [2024-07-15 23:26:04.623273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:13768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.298 [2024-07-15 23:26:04.623289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:52.298 [2024-07-15 23:26:04.623314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.298 [2024-07-15 23:26:04.623330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:52.298 [2024-07-15 23:26:04.623350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.298 [2024-07-15 23:26:04.623366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:52.298 [2024-07-15 23:26:04.623387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:14008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.298 [2024-07-15 23:26:04.623402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:52.298 [2024-07-15 23:26:04.623423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.298 [2024-07-15 23:26:04.623438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 
00:22:52.298 [2024-07-15 23:26:04.623458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:14328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.298 [2024-07-15 23:26:04.623474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:52.298 [2024-07-15 23:26:04.623494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:13960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.298 [2024-07-15 23:26:04.623508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:52.298 [2024-07-15 23:26:04.623529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.298 [2024-07-15 23:26:04.623544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:52.298 [2024-07-15 23:26:04.623565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.298 [2024-07-15 23:26:04.623580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:52.298 [2024-07-15 23:26:04.625871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:14616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.298 [2024-07-15 23:26:04.625896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:52.298 [2024-07-15 23:26:04.625923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.298 [2024-07-15 23:26:04.625941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:52.299 [2024-07-15 23:26:04.625963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.299 [2024-07-15 23:26:04.625979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:52.299 [2024-07-15 23:26:04.626000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:14664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.299 [2024-07-15 23:26:04.626030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:52.299 [2024-07-15 23:26:04.626053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:14680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.299 [2024-07-15 23:26:04.626073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.299 [2024-07-15 23:26:04.626111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.299 [2024-07-15 23:26:04.626126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:52.299 [2024-07-15 23:26:04.626146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.299 [2024-07-15 23:26:04.626162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:52.299 [2024-07-15 23:26:04.626183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:14728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.299 [2024-07-15 23:26:04.626198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:52.299 [2024-07-15 23:26:04.626224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:14744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.299 [2024-07-15 23:26:04.626239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:52.299 [2024-07-15 23:26:04.626259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:14760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.299 [2024-07-15 23:26:04.626274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:52.299 [2024-07-15 23:26:04.626295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:14776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.299 [2024-07-15 23:26:04.626310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:52.299 [2024-07-15 23:26:04.626331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:14792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.299 [2024-07-15 23:26:04.626345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:52.299 [2024-07-15 23:26:04.626366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.299 [2024-07-15 23:26:04.626381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:52.299 [2024-07-15 23:26:04.626401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.299 [2024-07-15 23:26:04.626416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:52.299 [2024-07-15 23:26:04.626436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.299 [2024-07-15 23:26:04.626451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:52.299 [2024-07-15 23:26:04.626473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.299 [2024-07-15 23:26:04.626488] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:52.299 [2024-07-15 23:26:04.626509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.299 [2024-07-15 23:26:04.626528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:52.299 [2024-07-15 23:26:04.626550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.299 [2024-07-15 23:26:04.626565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:52.299 [2024-07-15 23:26:04.626586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.299 [2024-07-15 23:26:04.626601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:52.299 [2024-07-15 23:26:04.626621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.299 [2024-07-15 23:26:04.626637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:52.299 [2024-07-15 23:26:04.626658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:14568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.299 [2024-07-15 23:26:04.626672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:52.299 [2024-07-15 23:26:04.626693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.299 [2024-07-15 23:26:04.626708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:52.299 [2024-07-15 23:26:04.626754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:14272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.299 [2024-07-15 23:26:04.626772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:52.299 [2024-07-15 23:26:04.626795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:14336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.299 [2024-07-15 23:26:04.626811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:52.299 [2024-07-15 23:26:04.626832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.299 [2024-07-15 23:26:04.626848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:52.299 [2024-07-15 23:26:04.626870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:52.299 [2024-07-15 23:26:04.626886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:52.299 [2024-07-15 23:26:04.626908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.299 [2024-07-15 23:26:04.626924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:52.299 [2024-07-15 23:26:04.626945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:14512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.299 [2024-07-15 23:26:04.626962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:52.299 [2024-07-15 23:26:04.626983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:14576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.299 [2024-07-15 23:26:04.626999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:52.299 [2024-07-15 23:26:04.627043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:14224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.299 [2024-07-15 23:26:04.627060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:52.299 [2024-07-15 23:26:04.627096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:14200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.299 [2024-07-15 23:26:04.627112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:52.299 [2024-07-15 23:26:04.627133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.299 [2024-07-15 23:26:04.627149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:52.299 [2024-07-15 23:26:04.627170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:13048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.299 [2024-07-15 23:26:04.627185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:52.299 [2024-07-15 23:26:04.627205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.299 [2024-07-15 23:26:04.627220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:52.299 [2024-07-15 23:26:04.627241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.299 [2024-07-15 23:26:04.627255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:52.299 [2024-07-15 23:26:04.627276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 
lba:14328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.299 [2024-07-15 23:26:04.627291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.299 [2024-07-15 23:26:04.627311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.299 [2024-07-15 23:26:04.627327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.299 [2024-07-15 23:26:04.627347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.299 [2024-07-15 23:26:04.627362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:52.299 [2024-07-15 23:26:04.627383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.299 [2024-07-15 23:26:04.627398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:52.299 [2024-07-15 23:26:04.627418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.299 [2024-07-15 23:26:04.627433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:52.299 [2024-07-15 23:26:04.628612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.299 [2024-07-15 23:26:04.628635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:52.299 [2024-07-15 23:26:04.628665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.299 [2024-07-15 23:26:04.628683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:52.299 [2024-07-15 23:26:04.628704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.299 [2024-07-15 23:26:04.628734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:52.299 [2024-07-15 23:26:04.628767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.299 [2024-07-15 23:26:04.628791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:52.299 [2024-07-15 23:26:04.628815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.299 [2024-07-15 23:26:04.628831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:52.299 [2024-07-15 23:26:04.628854] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.299 [2024-07-15 23:26:04.628871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:52.299 [2024-07-15 23:26:04.628894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.299 [2024-07-15 23:26:04.628910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:52.299 [2024-07-15 23:26:04.628932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.300 [2024-07-15 23:26:04.628949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:52.300 [2024-07-15 23:26:04.628971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.300 [2024-07-15 23:26:04.628988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:52.300 [2024-07-15 23:26:04.629028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.300 [2024-07-15 23:26:04.629045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:52.300 [2024-07-15 23:26:04.629067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.300 [2024-07-15 23:26:04.629096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:52.300 [2024-07-15 23:26:04.629118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.300 [2024-07-15 23:26:04.629133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:52.300 [2024-07-15 23:26:04.629154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.300 [2024-07-15 23:26:04.629168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:52.300 [2024-07-15 23:26:04.629188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:15008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.300 [2024-07-15 23:26:04.629208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:52.300 [2024-07-15 23:26:04.629229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:15024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.300 [2024-07-15 23:26:04.629244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 
00:22:52.300 [2024-07-15 23:26:04.629265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:15040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.300 [2024-07-15 23:26:04.629281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:52.300 [2024-07-15 23:26:04.629816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:14256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.300 [2024-07-15 23:26:04.629840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:52.300 [2024-07-15 23:26:04.629884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.300 [2024-07-15 23:26:04.629905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:52.300 [2024-07-15 23:26:04.629927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:13776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.300 [2024-07-15 23:26:04.629945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:52.300 [2024-07-15 23:26:04.629967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:14632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.300 [2024-07-15 23:26:04.629984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:52.300 [2024-07-15 23:26:04.630007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:14664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.300 [2024-07-15 23:26:04.630038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:52.300 [2024-07-15 23:26:04.630061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.300 [2024-07-15 23:26:04.630077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:52.300 [2024-07-15 23:26:04.630113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.300 [2024-07-15 23:26:04.630129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:52.300 [2024-07-15 23:26:04.630149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.300 [2024-07-15 23:26:04.630165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:52.300 [2024-07-15 23:26:04.630186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.300 [2024-07-15 23:26:04.630202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:40 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:52.300 [2024-07-15 23:26:04.630223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.300 [2024-07-15 23:26:04.630243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:52.300 [2024-07-15 23:26:04.630265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.300 [2024-07-15 23:26:04.630280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:52.300 [2024-07-15 23:26:04.630301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.300 [2024-07-15 23:26:04.630316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:52.300 [2024-07-15 23:26:04.630336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:14536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.300 [2024-07-15 23:26:04.630351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.300 [2024-07-15 23:26:04.630372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.300 [2024-07-15 23:26:04.630386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:52.300 [2024-07-15 23:26:04.630407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.300 [2024-07-15 23:26:04.630422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:52.300 [2024-07-15 23:26:04.630442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.300 [2024-07-15 23:26:04.630457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:52.300 [2024-07-15 23:26:04.630477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.300 [2024-07-15 23:26:04.630492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:52.300 [2024-07-15 23:26:04.630513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:14224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.300 [2024-07-15 23:26:04.630528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:52.300 [2024-07-15 23:26:04.630549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:13984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.300 [2024-07-15 23:26:04.630564] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:52.300 [2024-07-15 23:26:04.630584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:13840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.300 [2024-07-15 23:26:04.630599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:52.300 [2024-07-15 23:26:04.630620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:14328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.300 [2024-07-15 23:26:04.630635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:52.300 [2024-07-15 23:26:04.630656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.300 [2024-07-15 23:26:04.630670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:52.300 [2024-07-15 23:26:04.630696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:12920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.300 [2024-07-15 23:26:04.630712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:52.300 [2024-07-15 23:26:04.631259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.300 [2024-07-15 23:26:04.631281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:52.300 [2024-07-15 23:26:04.631311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:15080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.300 [2024-07-15 23:26:04.631329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:52.300 [2024-07-15 23:26:04.631350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:15096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.300 [2024-07-15 23:26:04.631365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:52.300 [2024-07-15 23:26:04.631385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:15112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.300 [2024-07-15 23:26:04.631400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:52.300 [2024-07-15 23:26:04.631420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:15128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.300 [2024-07-15 23:26:04.631435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:52.300 [2024-07-15 23:26:04.631455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:14624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:52.300 [2024-07-15 23:26:04.631470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:52.300 [2024-07-15 23:26:04.631491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:14656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.300 [2024-07-15 23:26:04.631505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:52.300 [2024-07-15 23:26:04.631527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.300 [2024-07-15 23:26:04.631542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:52.300 [2024-07-15 23:26:04.631562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.300 [2024-07-15 23:26:04.631577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:52.300 [2024-07-15 23:26:04.631598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.300 [2024-07-15 23:26:04.631618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:52.300 [2024-07-15 23:26:04.631639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.300 [2024-07-15 23:26:04.631654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:52.300 [2024-07-15 23:26:04.631680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.300 [2024-07-15 23:26:04.631696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:52.300 Received shutdown signal, test time was about 34.531068 seconds 00:22:52.300 00:22:52.300 Latency(us) 00:22:52.300 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:52.300 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:52.300 Verification LBA range: start 0x0 length 0x4000 00:22:52.300 Nvme0n1 : 34.53 8331.15 32.54 0.00 0.00 15339.48 191.15 4026531.84 00:22:52.300 =================================================================================================================== 00:22:52.300 Total : 8331.15 32.54 0.00 0.00 15339.48 191.15 4026531.84 00:22:52.301 23:26:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:52.558 23:26:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:22:52.558 23:26:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:52.558 23:26:07 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@148 -- # nvmftestfini 00:22:52.558 23:26:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:52.558 23:26:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:22:52.558 23:26:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:52.558 23:26:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:22:52.558 23:26:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:52.558 23:26:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:52.558 rmmod nvme_tcp 00:22:52.558 rmmod nvme_fabrics 00:22:52.558 rmmod nvme_keyring 00:22:52.558 23:26:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:52.558 23:26:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:22:52.558 23:26:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:22:52.558 23:26:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2411707 ']' 00:22:52.558 23:26:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2411707 00:22:52.558 23:26:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2411707 ']' 00:22:52.558 23:26:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2411707 00:22:52.558 23:26:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:22:52.558 23:26:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:52.558 23:26:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2411707 00:22:52.815 23:26:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:52.815 23:26:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:52.815 23:26:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2411707' 00:22:52.815 killing process with pid 2411707 00:22:52.815 23:26:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2411707 00:22:52.815 23:26:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2411707 00:22:53.073 23:26:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:53.073 23:26:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:53.073 23:26:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:53.073 23:26:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:53.073 23:26:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:53.073 23:26:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.073 23:26:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:53.073 23:26:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.973 23:26:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:54.973 00:22:54.973 real 0m43.800s 00:22:54.973 user 2m11.794s 
00:22:54.973 sys 0m12.106s 00:22:54.973 23:26:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:54.973 23:26:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:54.973 ************************************ 00:22:54.973 END TEST nvmf_host_multipath_status 00:22:54.973 ************************************ 00:22:54.973 23:26:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:54.973 23:26:10 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:54.973 23:26:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:54.973 23:26:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:54.973 23:26:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:54.973 ************************************ 00:22:54.973 START TEST nvmf_discovery_remove_ifc 00:22:54.973 ************************************ 00:22:54.973 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:55.232 * Looking for test storage... 00:22:55.232 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:55.232 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:55.232 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:22:55.232 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:55.232 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:55.232 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:55.232 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:55.232 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:55.232 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:55.232 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:55.232 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:55.232 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:55.232 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:55.233 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:55.233 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:22:55.233 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:55.233 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:55.233 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:55.233 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:55.233 23:26:10 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:55.233 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:55.233 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:55.233 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:55.233 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.233 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.233 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.233 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:22:55.233 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.233 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:22:55.233 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:55.233 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:55.233 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:55.233 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:55.233 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:55.233 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:55.233 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:55.233 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:55.233 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:22:55.233 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:22:55.233 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:22:55.233 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:22:55.233 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:22:55.233 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:22:55.233 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:22:55.233 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:55.233 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:55.233 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:55.233 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:55.233 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:55.233 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:55.233 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:55.233 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.233 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:55.233 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:55.233 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:22:55.233 23:26:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:57.132 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:57.132 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:22:57.132 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:57.132 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:57.132 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:57.132 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:57.132 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:57.132 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:22:57.132 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 
00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:57.133 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:57.133 
23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:22:57.133 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:57.133 Found net devices under 0000:84:00.0: cvl_0_0 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:57.133 Found net devices under 0000:84:00.1: cvl_0_1 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc 
-- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:57.133 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:57.391 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:57.391 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:57.391 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:57.391 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:57.391 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:57.391 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:57.391 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:57.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:57.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:22:57.391 00:22:57.391 --- 10.0.0.2 ping statistics --- 00:22:57.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.391 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:22:57.391 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:57.391 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:57.391 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:22:57.391 00:22:57.391 --- 10.0.0.1 ping statistics --- 00:22:57.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.391 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:22:57.391 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:57.391 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:22:57.391 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:57.391 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:57.391 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:57.391 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:57.391 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:57.391 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:57.391 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:57.391 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:22:57.391 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:57.391 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:57.391 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:57.391 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=2418480 00:22:57.391 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:57.391 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 2418480 00:22:57.391 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 2418480 ']' 00:22:57.391 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:57.391 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:57.391 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:57.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:57.391 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:57.391 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:57.391 [2024-07-15 23:26:12.625429] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
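For reference, the target-side wiring that nvmf_tcp_init performs in the trace above can be reproduced by hand. This is a minimal sketch using the interface names, namespace name, and addresses shown in the log; it assumes the two e810 ports are already bound to the kernel ice driver and that it runs as root.

#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init steps traced above (names/addresses taken from the log).
set -e
TGT_IF=cvl_0_0          # port moved into the target namespace
INI_IF=cvl_0_1          # port left in the root namespace (initiator side)
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic to the default data port.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

# Sanity checks, as in the log.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
modprobe nvme-tcp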
00:22:57.391 [2024-07-15 23:26:12.625530] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:57.391 EAL: No free 2048 kB hugepages reported on node 1 00:22:57.391 [2024-07-15 23:26:12.689542] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.649 [2024-07-15 23:26:12.799141] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:57.649 [2024-07-15 23:26:12.799197] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:57.649 [2024-07-15 23:26:12.799221] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:57.649 [2024-07-15 23:26:12.799233] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:57.649 [2024-07-15 23:26:12.799243] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:57.649 [2024-07-15 23:26:12.799301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:57.649 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:57.649 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:22:57.649 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:57.649 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:57.649 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:57.649 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:57.649 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:22:57.649 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.649 23:26:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:57.649 [2024-07-15 23:26:12.949689] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:57.649 [2024-07-15 23:26:12.957882] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:57.908 null0 00:22:57.908 [2024-07-15 23:26:12.989824] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:57.908 23:26:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.908 23:26:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2418572 00:22:57.908 23:26:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:22:57.908 23:26:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2418572 /tmp/host.sock 00:22:57.908 23:26:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 2418572 ']' 00:22:57.908 23:26:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:57.908 23:26:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:22:57.908 23:26:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:57.908 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:57.908 23:26:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:57.908 23:26:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:57.908 [2024-07-15 23:26:13.057423] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:22:57.908 [2024-07-15 23:26:13.057506] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2418572 ] 00:22:57.908 EAL: No free 2048 kB hugepages reported on node 1 00:22:57.908 [2024-07-15 23:26:13.118058] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.171 [2024-07-15 23:26:13.237337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.171 23:26:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:58.171 23:26:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:22:58.171 23:26:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:58.171 23:26:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:58.171 23:26:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.171 23:26:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:58.171 23:26:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.171 23:26:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:58.171 23:26:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.171 23:26:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:58.171 23:26:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.171 23:26:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:58.171 23:26:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.171 23:26:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:59.543 [2024-07-15 23:26:14.448588] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:59.543 [2024-07-15 23:26:14.448620] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:59.543 [2024-07-15 23:26:14.448653] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:59.543 [2024-07-15 23:26:14.577069] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:59.543 [2024-07-15 23:26:14.638774] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:59.543 [2024-07-15 23:26:14.638837] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:59.543 [2024-07-15 23:26:14.638878] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:59.543 [2024-07-15 23:26:14.638902] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:59.543 [2024-07-15 23:26:14.638934] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:59.543 23:26:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.543 23:26:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:59.543 23:26:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:59.543 23:26:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:59.543 23:26:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:59.543 23:26:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.543 23:26:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:59.543 23:26:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:59.543 23:26:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:59.543 [2024-07-15 23:26:14.645828] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1167110 was disconnected and freed. delete nvme_qpair. 
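The host-side steps just traced (bdev_nvme_set_options, framework_start_init, bdev_nvme_start_discovery, then polling bdev_get_bdevs until nvme0n1 appears) reduce to a small RPC loop. This is a condensed sketch of the get_bdev_list/wait_for_bdev pattern visible in the trace, assuming the rpc.py path and the /tmp/host.sock application socket shown above; it is not the script itself.

#!/usr/bin/env bash
# Condensed sketch of the discovery start and bdev wait traced above.
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock"

$RPC bdev_nvme_set_options -e 1
$RPC framework_start_init
$RPC bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 --wait-for-attach

# Poll the bdev list until it matches the expected contents.
wait_for_bdev() {
    local expected=$1
    local bdevs
    while true; do
        bdevs=$($RPC bdev_get_bdevs | jq -r '.[].name' | sort | xargs)
        [[ "$bdevs" == "$expected" ]] && break
        sleep 1
    done
}

wait_for_bdev nvme0n1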
00:22:59.543 23:26:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.543 23:26:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:59.543 23:26:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:22:59.543 23:26:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:22:59.543 23:26:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:59.543 23:26:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:59.543 23:26:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:59.543 23:26:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:59.543 23:26:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.543 23:26:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:59.543 23:26:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:59.543 23:26:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:59.543 23:26:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.543 23:26:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:59.543 23:26:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:00.477 23:26:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:00.477 23:26:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:00.477 23:26:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:00.477 23:26:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.477 23:26:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:00.477 23:26:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:00.477 23:26:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:00.735 23:26:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.735 23:26:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:00.735 23:26:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:01.669 23:26:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:01.669 23:26:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:01.669 23:26:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.669 23:26:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:01.669 23:26:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:01.669 23:26:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
sort 00:23:01.669 23:26:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:01.669 23:26:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.669 23:26:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:01.669 23:26:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:02.602 23:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:02.602 23:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:02.602 23:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:02.602 23:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.602 23:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:02.602 23:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:02.602 23:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:02.602 23:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.602 23:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:02.602 23:26:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:03.977 23:26:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:03.977 23:26:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:03.977 23:26:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:03.977 23:26:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.977 23:26:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:03.977 23:26:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:03.977 23:26:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:03.977 23:26:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.977 23:26:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:03.977 23:26:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:04.910 23:26:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:04.910 23:26:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:04.910 23:26:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:04.910 23:26:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:04.910 23:26:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.910 23:26:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:04.910 23:26:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:04.910 23:26:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:23:04.910 23:26:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:04.910 23:26:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:04.910 [2024-07-15 23:26:20.080139] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:23:04.910 [2024-07-15 23:26:20.080222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.910 [2024-07-15 23:26:20.080247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.910 [2024-07-15 23:26:20.080268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.910 [2024-07-15 23:26:20.080284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.910 [2024-07-15 23:26:20.080300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.910 [2024-07-15 23:26:20.080316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.910 [2024-07-15 23:26:20.080333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.910 [2024-07-15 23:26:20.080349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.910 [2024-07-15 23:26:20.080366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.910 [2024-07-15 23:26:20.080383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.910 [2024-07-15 23:26:20.080398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112db30 is same with the state(5) to be set 00:23:04.910 [2024-07-15 23:26:20.090153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x112db30 (9): Bad file descriptor 00:23:04.910 [2024-07-15 23:26:20.100208] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:05.843 23:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:05.843 23:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:05.843 23:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.843 23:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:05.843 23:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:05.843 23:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:05.843 23:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:05.843 [2024-07-15 23:26:21.127789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:05.843 [2024-07-15 
23:26:21.127866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x112db30 with addr=10.0.0.2, port=4420 00:23:05.843 [2024-07-15 23:26:21.127902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112db30 is same with the state(5) to be set 00:23:05.843 [2024-07-15 23:26:21.127971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x112db30 (9): Bad file descriptor 00:23:05.843 [2024-07-15 23:26:21.128502] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:05.843 [2024-07-15 23:26:21.128539] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:05.843 [2024-07-15 23:26:21.128566] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:05.843 [2024-07-15 23:26:21.128587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:05.843 [2024-07-15 23:26:21.128625] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:05.843 [2024-07-15 23:26:21.128646] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:05.843 23:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.843 23:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:05.843 23:26:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:07.216 [2024-07-15 23:26:22.131150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:07.216 [2024-07-15 23:26:22.131187] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:07.216 [2024-07-15 23:26:22.131202] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:07.216 [2024-07-15 23:26:22.131216] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:23:07.216 [2024-07-15 23:26:22.131237] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:07.217 [2024-07-15 23:26:22.131272] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:23:07.217 [2024-07-15 23:26:22.131314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.217 [2024-07-15 23:26:22.131335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.217 [2024-07-15 23:26:22.131354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.217 [2024-07-15 23:26:22.131368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.217 [2024-07-15 23:26:22.131382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.217 [2024-07-15 23:26:22.131395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.217 [2024-07-15 23:26:22.131408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.217 [2024-07-15 23:26:22.131422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.217 [2024-07-15 23:26:22.131437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.217 [2024-07-15 23:26:22.131450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.217 [2024-07-15 23:26:22.131464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:23:07.217 [2024-07-15 23:26:22.131560] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x112cfb0 (9): Bad file descriptor 00:23:07.217 [2024-07-15 23:26:22.132590] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:23:07.217 [2024-07-15 23:26:22.132613] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:23:07.217 23:26:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:07.217 23:26:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:07.217 23:26:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.217 23:26:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:07.217 23:26:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:07.217 23:26:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:07.217 23:26:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:07.217 23:26:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.217 23:26:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:23:07.217 23:26:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:07.217 23:26:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:07.217 23:26:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:23:07.217 23:26:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:07.217 23:26:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:07.217 23:26:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:07.217 23:26:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.217 23:26:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:07.217 23:26:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:07.217 23:26:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:07.217 23:26:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.217 23:26:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:07.217 23:26:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:08.149 23:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:08.149 23:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:08.149 23:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:08.149 23:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.149 23:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:23:08.149 23:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:08.149 23:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:08.149 23:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.149 23:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:08.149 23:26:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:09.080 [2024-07-15 23:26:24.143012] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:09.080 [2024-07-15 23:26:24.143061] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:09.080 [2024-07-15 23:26:24.143100] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:09.080 [2024-07-15 23:26:24.229352] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:23:09.080 23:26:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:09.080 23:26:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:09.080 23:26:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:09.080 23:26:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.080 23:26:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:09.080 23:26:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:09.080 23:26:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:09.080 23:26:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.080 23:26:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:09.080 23:26:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:09.338 [2024-07-15 23:26:24.406612] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:09.338 [2024-07-15 23:26:24.406663] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:09.338 [2024-07-15 23:26:24.406696] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:09.338 [2024-07-15 23:26:24.406731] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:23:09.338 [2024-07-15 23:26:24.406754] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:09.338 [2024-07-15 23:26:24.411660] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1172e30 was disconnected and freed. delete nvme_qpair. 
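The interface-flap portion of the test traced above is an address/link toggle inside the target namespace, bracketed by the same bdev polling: the bdev list must first empty out, then repopulate once discovery re-attaches the subsystem as a new controller (nvme1). A minimal sketch, reusing wait_for_bdev from the previous snippet and the names from the log:

# Sketch of the interface removal/restore sequence traced above.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0

# Drop the target's data address and take the link down: the host loses the
# connection and nvme0n1 disappears from the bdev list.
ip netns exec "$NS" ip addr del 10.0.0.2/24 dev "$TGT_IF"
ip netns exec "$NS" ip link set "$TGT_IF" down
wait_for_bdev ''            # wait until the bdev list is empty

# Restore the address and bring the link back up: discovery re-attaches the
# subsystem as nvme1 and nvme1n1 shows up.
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip netns exec "$NS" ip link set "$TGT_IF" up
wait_for_bdev nvme1n1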
00:23:10.270 23:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:10.270 23:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:10.270 23:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:10.270 23:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.270 23:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:10.270 23:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:10.270 23:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:10.270 23:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.270 23:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:23:10.270 23:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:23:10.270 23:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2418572 00:23:10.270 23:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2418572 ']' 00:23:10.270 23:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 2418572 00:23:10.270 23:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:23:10.270 23:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:10.270 23:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2418572 00:23:10.270 23:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:10.270 23:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:10.270 23:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2418572' 00:23:10.270 killing process with pid 2418572 00:23:10.270 23:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2418572 00:23:10.270 23:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2418572 00:23:10.527 23:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:23:10.527 23:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:10.527 23:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:23:10.527 23:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:10.527 23:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:23:10.527 23:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:10.527 23:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:10.527 rmmod nvme_tcp 00:23:10.527 rmmod nvme_fabrics 00:23:10.527 rmmod nvme_keyring 00:23:10.527 23:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:10.527 23:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:23:10.527 23:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
00:23:10.527 23:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 2418480 ']' 00:23:10.527 23:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 2418480 00:23:10.527 23:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2418480 ']' 00:23:10.527 23:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 2418480 00:23:10.527 23:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:23:10.527 23:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:10.527 23:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2418480 00:23:10.527 23:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:10.527 23:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:10.527 23:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2418480' 00:23:10.527 killing process with pid 2418480 00:23:10.527 23:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2418480 00:23:10.527 23:26:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2418480 00:23:10.784 23:26:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:10.784 23:26:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:10.784 23:26:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:10.784 23:26:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:10.784 23:26:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:10.784 23:26:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:10.784 23:26:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:10.784 23:26:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:13.375 23:26:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:13.375 00:23:13.375 real 0m17.829s 00:23:13.375 user 0m25.757s 00:23:13.375 sys 0m3.079s 00:23:13.375 23:26:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:13.375 23:26:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:13.375 ************************************ 00:23:13.375 END TEST nvmf_discovery_remove_ifc 00:23:13.375 ************************************ 00:23:13.375 23:26:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:13.375 23:26:28 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:13.375 23:26:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:13.375 23:26:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:13.375 23:26:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:13.375 ************************************ 00:23:13.375 START TEST nvmf_identify_kernel_target 00:23:13.375 ************************************ 
00:23:13.375 23:26:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:13.375 * Looking for test storage... 00:23:13.375 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:13.375 23:26:28 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:13.375 23:26:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:23:13.375 23:26:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:13.375 23:26:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:13.375 23:26:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:13.375 23:26:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:13.375 23:26:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:13.375 23:26:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:13.375 23:26:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:13.375 23:26:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:13.375 23:26:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:13.375 23:26:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:13.375 23:26:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:13.375 23:26:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:13.375 23:26:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:13.375 23:26:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:13.375 23:26:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:13.375 23:26:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:13.375 23:26:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:13.375 23:26:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:13.375 23:26:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:13.375 23:26:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:13.375 23:26:28 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.375 23:26:28 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.375 23:26:28 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.376 23:26:28 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:23:13.376 23:26:28 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.376 23:26:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:23:13.376 23:26:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:13.376 23:26:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:13.376 23:26:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:13.376 23:26:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:13.376 23:26:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:13.376 23:26:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:13.376 23:26:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:13.376 23:26:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:13.376 23:26:28 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:23:13.376 23:26:28 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:13.376 23:26:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:13.376 23:26:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:13.376 23:26:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:13.376 23:26:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:13.376 23:26:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.376 23:26:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:13.376 23:26:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:13.376 23:26:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:13.376 23:26:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:13.376 23:26:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:23:13.376 23:26:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:15.272 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:15.272 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:15.272 Found net devices under 0000:84:00.0: cvl_0_0 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:15.272 Found net devices under 0000:84:00.1: cvl_0_1 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:15.272 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:15.273 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:15.273 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:15.273 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:15.273 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:15.273 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:15.273 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:15.273 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:15.273 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:15.273 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:15.273 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:15.273 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:15.273 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:15.273 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:23:15.273 00:23:15.273 --- 10.0.0.2 ping statistics --- 00:23:15.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.273 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:23:15.273 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:15.273 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:15.273 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:23:15.273 00:23:15.273 --- 10.0.0.1 ping statistics --- 00:23:15.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.273 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:23:15.273 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:15.273 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:23:15.273 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:15.273 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:15.273 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:15.273 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:15.273 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:15.273 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:15.273 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:15.273 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:23:15.273 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:23:15.273 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:23:15.273 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.273 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.273 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.273 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.273 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:15.273 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:15.273 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:15.273 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:15.273 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:15.273 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:23:15.273 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:15.273 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:15.273 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:15.273 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:15.273 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:15.273 23:26:30 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:15.273 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:23:15.273 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:23:15.273 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:15.273 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:15.273 23:26:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:16.207 Waiting for block devices as requested 00:23:16.207 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:23:16.465 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:16.465 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:16.721 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:16.721 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:16.721 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:16.721 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:16.721 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:16.978 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:16.978 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:16.978 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:16.978 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:17.235 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:17.235 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:17.235 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:17.235 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:17.492 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:17.492 23:26:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:17.492 23:26:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:17.492 23:26:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:17.492 23:26:32 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:23:17.492 23:26:32 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:17.492 23:26:32 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:17.492 23:26:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:17.492 23:26:32 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:17.492 23:26:32 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:17.492 No valid GPT data, bailing 00:23:17.492 23:26:32 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:17.492 23:26:32 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:23:17.492 23:26:32 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:23:17.492 23:26:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:17.492 23:26:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:23:17.492 23:26:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:17.492 23:26:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:17.492 23:26:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:17.492 23:26:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:17.492 23:26:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:23:17.492 23:26:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:23:17.492 23:26:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:23:17.492 23:26:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:17.492 23:26:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:23:17.492 23:26:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:23:17.492 23:26:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:23:17.492 23:26:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:17.750 23:26:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:23:17.750 00:23:17.750 Discovery Log Number of Records 2, Generation counter 2 00:23:17.750 =====Discovery Log Entry 0====== 00:23:17.750 trtype: tcp 00:23:17.750 adrfam: ipv4 00:23:17.750 subtype: current discovery subsystem 00:23:17.751 treq: not specified, sq flow control disable supported 00:23:17.751 portid: 1 00:23:17.751 trsvcid: 4420 00:23:17.751 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:17.751 traddr: 10.0.0.1 00:23:17.751 eflags: none 00:23:17.751 sectype: none 00:23:17.751 =====Discovery Log Entry 1====== 00:23:17.751 trtype: tcp 00:23:17.751 adrfam: ipv4 00:23:17.751 subtype: nvme subsystem 00:23:17.751 treq: not specified, sq flow control disable supported 00:23:17.751 portid: 1 00:23:17.751 trsvcid: 4420 00:23:17.751 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:17.751 traddr: 10.0.0.1 00:23:17.751 eflags: none 00:23:17.751 sectype: none 00:23:17.751 23:26:32 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:23:17.751 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:23:17.751 EAL: No free 2048 kB hugepages reported on node 1 00:23:17.751 ===================================================== 00:23:17.751 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:17.751 ===================================================== 00:23:17.751 Controller Capabilities/Features 00:23:17.751 ================================ 00:23:17.751 Vendor ID: 0000 00:23:17.751 Subsystem Vendor ID: 0000 00:23:17.751 Serial Number: a529dac17a1e11d393cb 00:23:17.751 Model Number: Linux 00:23:17.751 Firmware Version: 6.7.0-68 00:23:17.751 Recommended Arb Burst: 0 00:23:17.751 IEEE OUI Identifier: 00 00 00 00:23:17.751 Multi-path I/O 00:23:17.751 May have multiple subsystem ports: No 00:23:17.751 May have multiple 
controllers: No 00:23:17.751 Associated with SR-IOV VF: No 00:23:17.751 Max Data Transfer Size: Unlimited 00:23:17.751 Max Number of Namespaces: 0 00:23:17.751 Max Number of I/O Queues: 1024 00:23:17.751 NVMe Specification Version (VS): 1.3 00:23:17.751 NVMe Specification Version (Identify): 1.3 00:23:17.751 Maximum Queue Entries: 1024 00:23:17.751 Contiguous Queues Required: No 00:23:17.751 Arbitration Mechanisms Supported 00:23:17.751 Weighted Round Robin: Not Supported 00:23:17.751 Vendor Specific: Not Supported 00:23:17.751 Reset Timeout: 7500 ms 00:23:17.751 Doorbell Stride: 4 bytes 00:23:17.751 NVM Subsystem Reset: Not Supported 00:23:17.751 Command Sets Supported 00:23:17.751 NVM Command Set: Supported 00:23:17.751 Boot Partition: Not Supported 00:23:17.751 Memory Page Size Minimum: 4096 bytes 00:23:17.751 Memory Page Size Maximum: 4096 bytes 00:23:17.751 Persistent Memory Region: Not Supported 00:23:17.751 Optional Asynchronous Events Supported 00:23:17.751 Namespace Attribute Notices: Not Supported 00:23:17.751 Firmware Activation Notices: Not Supported 00:23:17.751 ANA Change Notices: Not Supported 00:23:17.751 PLE Aggregate Log Change Notices: Not Supported 00:23:17.751 LBA Status Info Alert Notices: Not Supported 00:23:17.751 EGE Aggregate Log Change Notices: Not Supported 00:23:17.751 Normal NVM Subsystem Shutdown event: Not Supported 00:23:17.751 Zone Descriptor Change Notices: Not Supported 00:23:17.751 Discovery Log Change Notices: Supported 00:23:17.751 Controller Attributes 00:23:17.751 128-bit Host Identifier: Not Supported 00:23:17.751 Non-Operational Permissive Mode: Not Supported 00:23:17.751 NVM Sets: Not Supported 00:23:17.751 Read Recovery Levels: Not Supported 00:23:17.751 Endurance Groups: Not Supported 00:23:17.751 Predictable Latency Mode: Not Supported 00:23:17.751 Traffic Based Keep ALive: Not Supported 00:23:17.751 Namespace Granularity: Not Supported 00:23:17.751 SQ Associations: Not Supported 00:23:17.751 UUID List: Not Supported 00:23:17.751 Multi-Domain Subsystem: Not Supported 00:23:17.751 Fixed Capacity Management: Not Supported 00:23:17.751 Variable Capacity Management: Not Supported 00:23:17.751 Delete Endurance Group: Not Supported 00:23:17.751 Delete NVM Set: Not Supported 00:23:17.751 Extended LBA Formats Supported: Not Supported 00:23:17.751 Flexible Data Placement Supported: Not Supported 00:23:17.751 00:23:17.751 Controller Memory Buffer Support 00:23:17.751 ================================ 00:23:17.751 Supported: No 00:23:17.751 00:23:17.751 Persistent Memory Region Support 00:23:17.751 ================================ 00:23:17.751 Supported: No 00:23:17.751 00:23:17.751 Admin Command Set Attributes 00:23:17.751 ============================ 00:23:17.751 Security Send/Receive: Not Supported 00:23:17.751 Format NVM: Not Supported 00:23:17.751 Firmware Activate/Download: Not Supported 00:23:17.751 Namespace Management: Not Supported 00:23:17.751 Device Self-Test: Not Supported 00:23:17.751 Directives: Not Supported 00:23:17.751 NVMe-MI: Not Supported 00:23:17.751 Virtualization Management: Not Supported 00:23:17.751 Doorbell Buffer Config: Not Supported 00:23:17.751 Get LBA Status Capability: Not Supported 00:23:17.751 Command & Feature Lockdown Capability: Not Supported 00:23:17.751 Abort Command Limit: 1 00:23:17.751 Async Event Request Limit: 1 00:23:17.751 Number of Firmware Slots: N/A 00:23:17.751 Firmware Slot 1 Read-Only: N/A 00:23:17.751 Firmware Activation Without Reset: N/A 00:23:17.751 Multiple Update Detection Support: N/A 
00:23:17.751 Firmware Update Granularity: No Information Provided 00:23:17.751 Per-Namespace SMART Log: No 00:23:17.751 Asymmetric Namespace Access Log Page: Not Supported 00:23:17.751 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:17.751 Command Effects Log Page: Not Supported 00:23:17.751 Get Log Page Extended Data: Supported 00:23:17.751 Telemetry Log Pages: Not Supported 00:23:17.751 Persistent Event Log Pages: Not Supported 00:23:17.751 Supported Log Pages Log Page: May Support 00:23:17.751 Commands Supported & Effects Log Page: Not Supported 00:23:17.751 Feature Identifiers & Effects Log Page:May Support 00:23:17.751 NVMe-MI Commands & Effects Log Page: May Support 00:23:17.751 Data Area 4 for Telemetry Log: Not Supported 00:23:17.751 Error Log Page Entries Supported: 1 00:23:17.751 Keep Alive: Not Supported 00:23:17.751 00:23:17.751 NVM Command Set Attributes 00:23:17.751 ========================== 00:23:17.751 Submission Queue Entry Size 00:23:17.751 Max: 1 00:23:17.751 Min: 1 00:23:17.751 Completion Queue Entry Size 00:23:17.751 Max: 1 00:23:17.751 Min: 1 00:23:17.751 Number of Namespaces: 0 00:23:17.751 Compare Command: Not Supported 00:23:17.751 Write Uncorrectable Command: Not Supported 00:23:17.751 Dataset Management Command: Not Supported 00:23:17.751 Write Zeroes Command: Not Supported 00:23:17.751 Set Features Save Field: Not Supported 00:23:17.751 Reservations: Not Supported 00:23:17.751 Timestamp: Not Supported 00:23:17.751 Copy: Not Supported 00:23:17.751 Volatile Write Cache: Not Present 00:23:17.751 Atomic Write Unit (Normal): 1 00:23:17.751 Atomic Write Unit (PFail): 1 00:23:17.751 Atomic Compare & Write Unit: 1 00:23:17.751 Fused Compare & Write: Not Supported 00:23:17.751 Scatter-Gather List 00:23:17.751 SGL Command Set: Supported 00:23:17.751 SGL Keyed: Not Supported 00:23:17.751 SGL Bit Bucket Descriptor: Not Supported 00:23:17.751 SGL Metadata Pointer: Not Supported 00:23:17.751 Oversized SGL: Not Supported 00:23:17.751 SGL Metadata Address: Not Supported 00:23:17.751 SGL Offset: Supported 00:23:17.751 Transport SGL Data Block: Not Supported 00:23:17.751 Replay Protected Memory Block: Not Supported 00:23:17.751 00:23:17.751 Firmware Slot Information 00:23:17.751 ========================= 00:23:17.751 Active slot: 0 00:23:17.751 00:23:17.751 00:23:17.751 Error Log 00:23:17.751 ========= 00:23:17.751 00:23:17.751 Active Namespaces 00:23:17.751 ================= 00:23:17.751 Discovery Log Page 00:23:17.751 ================== 00:23:17.751 Generation Counter: 2 00:23:17.751 Number of Records: 2 00:23:17.751 Record Format: 0 00:23:17.751 00:23:17.751 Discovery Log Entry 0 00:23:17.751 ---------------------- 00:23:17.751 Transport Type: 3 (TCP) 00:23:17.751 Address Family: 1 (IPv4) 00:23:17.751 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:17.751 Entry Flags: 00:23:17.751 Duplicate Returned Information: 0 00:23:17.751 Explicit Persistent Connection Support for Discovery: 0 00:23:17.751 Transport Requirements: 00:23:17.751 Secure Channel: Not Specified 00:23:17.751 Port ID: 1 (0x0001) 00:23:17.751 Controller ID: 65535 (0xffff) 00:23:17.751 Admin Max SQ Size: 32 00:23:17.751 Transport Service Identifier: 4420 00:23:17.751 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:17.751 Transport Address: 10.0.0.1 00:23:17.751 Discovery Log Entry 1 00:23:17.751 ---------------------- 00:23:17.751 Transport Type: 3 (TCP) 00:23:17.751 Address Family: 1 (IPv4) 00:23:17.751 Subsystem Type: 2 (NVM Subsystem) 00:23:17.751 Entry Flags: 
00:23:17.751 Duplicate Returned Information: 0 00:23:17.752 Explicit Persistent Connection Support for Discovery: 0 00:23:17.752 Transport Requirements: 00:23:17.752 Secure Channel: Not Specified 00:23:17.752 Port ID: 1 (0x0001) 00:23:17.752 Controller ID: 65535 (0xffff) 00:23:17.752 Admin Max SQ Size: 32 00:23:17.752 Transport Service Identifier: 4420 00:23:17.752 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:23:17.752 Transport Address: 10.0.0.1 00:23:17.752 23:26:33 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:17.752 EAL: No free 2048 kB hugepages reported on node 1 00:23:18.011 get_feature(0x01) failed 00:23:18.011 get_feature(0x02) failed 00:23:18.011 get_feature(0x04) failed 00:23:18.011 ===================================================== 00:23:18.011 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:18.011 ===================================================== 00:23:18.011 Controller Capabilities/Features 00:23:18.011 ================================ 00:23:18.011 Vendor ID: 0000 00:23:18.011 Subsystem Vendor ID: 0000 00:23:18.011 Serial Number: cfd819aff67f8ee43a40 00:23:18.011 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:23:18.011 Firmware Version: 6.7.0-68 00:23:18.011 Recommended Arb Burst: 6 00:23:18.011 IEEE OUI Identifier: 00 00 00 00:23:18.011 Multi-path I/O 00:23:18.011 May have multiple subsystem ports: Yes 00:23:18.011 May have multiple controllers: Yes 00:23:18.011 Associated with SR-IOV VF: No 00:23:18.011 Max Data Transfer Size: Unlimited 00:23:18.011 Max Number of Namespaces: 1024 00:23:18.011 Max Number of I/O Queues: 128 00:23:18.011 NVMe Specification Version (VS): 1.3 00:23:18.011 NVMe Specification Version (Identify): 1.3 00:23:18.011 Maximum Queue Entries: 1024 00:23:18.011 Contiguous Queues Required: No 00:23:18.011 Arbitration Mechanisms Supported 00:23:18.011 Weighted Round Robin: Not Supported 00:23:18.011 Vendor Specific: Not Supported 00:23:18.011 Reset Timeout: 7500 ms 00:23:18.011 Doorbell Stride: 4 bytes 00:23:18.011 NVM Subsystem Reset: Not Supported 00:23:18.011 Command Sets Supported 00:23:18.011 NVM Command Set: Supported 00:23:18.011 Boot Partition: Not Supported 00:23:18.011 Memory Page Size Minimum: 4096 bytes 00:23:18.011 Memory Page Size Maximum: 4096 bytes 00:23:18.011 Persistent Memory Region: Not Supported 00:23:18.012 Optional Asynchronous Events Supported 00:23:18.012 Namespace Attribute Notices: Supported 00:23:18.012 Firmware Activation Notices: Not Supported 00:23:18.012 ANA Change Notices: Supported 00:23:18.012 PLE Aggregate Log Change Notices: Not Supported 00:23:18.012 LBA Status Info Alert Notices: Not Supported 00:23:18.012 EGE Aggregate Log Change Notices: Not Supported 00:23:18.012 Normal NVM Subsystem Shutdown event: Not Supported 00:23:18.012 Zone Descriptor Change Notices: Not Supported 00:23:18.012 Discovery Log Change Notices: Not Supported 00:23:18.012 Controller Attributes 00:23:18.012 128-bit Host Identifier: Supported 00:23:18.012 Non-Operational Permissive Mode: Not Supported 00:23:18.012 NVM Sets: Not Supported 00:23:18.012 Read Recovery Levels: Not Supported 00:23:18.012 Endurance Groups: Not Supported 00:23:18.012 Predictable Latency Mode: Not Supported 00:23:18.012 Traffic Based Keep ALive: Supported 00:23:18.012 Namespace Granularity: Not Supported 
00:23:18.012 SQ Associations: Not Supported 00:23:18.012 UUID List: Not Supported 00:23:18.012 Multi-Domain Subsystem: Not Supported 00:23:18.012 Fixed Capacity Management: Not Supported 00:23:18.012 Variable Capacity Management: Not Supported 00:23:18.012 Delete Endurance Group: Not Supported 00:23:18.012 Delete NVM Set: Not Supported 00:23:18.012 Extended LBA Formats Supported: Not Supported 00:23:18.012 Flexible Data Placement Supported: Not Supported 00:23:18.012 00:23:18.012 Controller Memory Buffer Support 00:23:18.012 ================================ 00:23:18.012 Supported: No 00:23:18.012 00:23:18.012 Persistent Memory Region Support 00:23:18.012 ================================ 00:23:18.012 Supported: No 00:23:18.012 00:23:18.012 Admin Command Set Attributes 00:23:18.012 ============================ 00:23:18.012 Security Send/Receive: Not Supported 00:23:18.012 Format NVM: Not Supported 00:23:18.012 Firmware Activate/Download: Not Supported 00:23:18.012 Namespace Management: Not Supported 00:23:18.012 Device Self-Test: Not Supported 00:23:18.012 Directives: Not Supported 00:23:18.012 NVMe-MI: Not Supported 00:23:18.012 Virtualization Management: Not Supported 00:23:18.012 Doorbell Buffer Config: Not Supported 00:23:18.012 Get LBA Status Capability: Not Supported 00:23:18.012 Command & Feature Lockdown Capability: Not Supported 00:23:18.012 Abort Command Limit: 4 00:23:18.012 Async Event Request Limit: 4 00:23:18.012 Number of Firmware Slots: N/A 00:23:18.012 Firmware Slot 1 Read-Only: N/A 00:23:18.012 Firmware Activation Without Reset: N/A 00:23:18.012 Multiple Update Detection Support: N/A 00:23:18.012 Firmware Update Granularity: No Information Provided 00:23:18.012 Per-Namespace SMART Log: Yes 00:23:18.012 Asymmetric Namespace Access Log Page: Supported 00:23:18.012 ANA Transition Time : 10 sec 00:23:18.012 00:23:18.012 Asymmetric Namespace Access Capabilities 00:23:18.012 ANA Optimized State : Supported 00:23:18.012 ANA Non-Optimized State : Supported 00:23:18.012 ANA Inaccessible State : Supported 00:23:18.012 ANA Persistent Loss State : Supported 00:23:18.012 ANA Change State : Supported 00:23:18.012 ANAGRPID is not changed : No 00:23:18.012 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:23:18.012 00:23:18.012 ANA Group Identifier Maximum : 128 00:23:18.012 Number of ANA Group Identifiers : 128 00:23:18.012 Max Number of Allowed Namespaces : 1024 00:23:18.012 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:23:18.012 Command Effects Log Page: Supported 00:23:18.012 Get Log Page Extended Data: Supported 00:23:18.012 Telemetry Log Pages: Not Supported 00:23:18.012 Persistent Event Log Pages: Not Supported 00:23:18.012 Supported Log Pages Log Page: May Support 00:23:18.012 Commands Supported & Effects Log Page: Not Supported 00:23:18.012 Feature Identifiers & Effects Log Page:May Support 00:23:18.013 NVMe-MI Commands & Effects Log Page: May Support 00:23:18.013 Data Area 4 for Telemetry Log: Not Supported 00:23:18.013 Error Log Page Entries Supported: 128 00:23:18.013 Keep Alive: Supported 00:23:18.013 Keep Alive Granularity: 1000 ms 00:23:18.013 00:23:18.013 NVM Command Set Attributes 00:23:18.013 ========================== 00:23:18.013 Submission Queue Entry Size 00:23:18.013 Max: 64 00:23:18.013 Min: 64 00:23:18.013 Completion Queue Entry Size 00:23:18.013 Max: 16 00:23:18.013 Min: 16 00:23:18.013 Number of Namespaces: 1024 00:23:18.013 Compare Command: Not Supported 00:23:18.013 Write Uncorrectable Command: Not Supported 00:23:18.013 Dataset Management Command: Supported 
00:23:18.013 Write Zeroes Command: Supported 00:23:18.013 Set Features Save Field: Not Supported 00:23:18.013 Reservations: Not Supported 00:23:18.013 Timestamp: Not Supported 00:23:18.013 Copy: Not Supported 00:23:18.013 Volatile Write Cache: Present 00:23:18.013 Atomic Write Unit (Normal): 1 00:23:18.013 Atomic Write Unit (PFail): 1 00:23:18.013 Atomic Compare & Write Unit: 1 00:23:18.013 Fused Compare & Write: Not Supported 00:23:18.013 Scatter-Gather List 00:23:18.013 SGL Command Set: Supported 00:23:18.013 SGL Keyed: Not Supported 00:23:18.013 SGL Bit Bucket Descriptor: Not Supported 00:23:18.013 SGL Metadata Pointer: Not Supported 00:23:18.013 Oversized SGL: Not Supported 00:23:18.013 SGL Metadata Address: Not Supported 00:23:18.013 SGL Offset: Supported 00:23:18.013 Transport SGL Data Block: Not Supported 00:23:18.013 Replay Protected Memory Block: Not Supported 00:23:18.013 00:23:18.013 Firmware Slot Information 00:23:18.013 ========================= 00:23:18.013 Active slot: 0 00:23:18.013 00:23:18.013 Asymmetric Namespace Access 00:23:18.013 =========================== 00:23:18.013 Change Count : 0 00:23:18.013 Number of ANA Group Descriptors : 1 00:23:18.013 ANA Group Descriptor : 0 00:23:18.013 ANA Group ID : 1 00:23:18.013 Number of NSID Values : 1 00:23:18.013 Change Count : 0 00:23:18.013 ANA State : 1 00:23:18.013 Namespace Identifier : 1 00:23:18.013 00:23:18.013 Commands Supported and Effects 00:23:18.013 ============================== 00:23:18.013 Admin Commands 00:23:18.013 -------------- 00:23:18.013 Get Log Page (02h): Supported 00:23:18.013 Identify (06h): Supported 00:23:18.013 Abort (08h): Supported 00:23:18.013 Set Features (09h): Supported 00:23:18.013 Get Features (0Ah): Supported 00:23:18.013 Asynchronous Event Request (0Ch): Supported 00:23:18.013 Keep Alive (18h): Supported 00:23:18.013 I/O Commands 00:23:18.013 ------------ 00:23:18.013 Flush (00h): Supported 00:23:18.013 Write (01h): Supported LBA-Change 00:23:18.013 Read (02h): Supported 00:23:18.013 Write Zeroes (08h): Supported LBA-Change 00:23:18.013 Dataset Management (09h): Supported 00:23:18.013 00:23:18.013 Error Log 00:23:18.013 ========= 00:23:18.013 Entry: 0 00:23:18.013 Error Count: 0x3 00:23:18.013 Submission Queue Id: 0x0 00:23:18.013 Command Id: 0x5 00:23:18.013 Phase Bit: 0 00:23:18.013 Status Code: 0x2 00:23:18.013 Status Code Type: 0x0 00:23:18.013 Do Not Retry: 1 00:23:18.013 Error Location: 0x28 00:23:18.013 LBA: 0x0 00:23:18.013 Namespace: 0x0 00:23:18.013 Vendor Log Page: 0x0 00:23:18.013 ----------- 00:23:18.013 Entry: 1 00:23:18.013 Error Count: 0x2 00:23:18.013 Submission Queue Id: 0x0 00:23:18.013 Command Id: 0x5 00:23:18.013 Phase Bit: 0 00:23:18.013 Status Code: 0x2 00:23:18.013 Status Code Type: 0x0 00:23:18.013 Do Not Retry: 1 00:23:18.013 Error Location: 0x28 00:23:18.013 LBA: 0x0 00:23:18.013 Namespace: 0x0 00:23:18.013 Vendor Log Page: 0x0 00:23:18.013 ----------- 00:23:18.013 Entry: 2 00:23:18.014 Error Count: 0x1 00:23:18.014 Submission Queue Id: 0x0 00:23:18.014 Command Id: 0x4 00:23:18.014 Phase Bit: 0 00:23:18.014 Status Code: 0x2 00:23:18.014 Status Code Type: 0x0 00:23:18.014 Do Not Retry: 1 00:23:18.014 Error Location: 0x28 00:23:18.014 LBA: 0x0 00:23:18.014 Namespace: 0x0 00:23:18.014 Vendor Log Page: 0x0 00:23:18.014 00:23:18.014 Number of Queues 00:23:18.014 ================ 00:23:18.014 Number of I/O Submission Queues: 128 00:23:18.014 Number of I/O Completion Queues: 128 00:23:18.014 00:23:18.014 ZNS Specific Controller Data 00:23:18.014 
============================ 00:23:18.014 Zone Append Size Limit: 0 00:23:18.014 00:23:18.014 00:23:18.014 Active Namespaces 00:23:18.014 ================= 00:23:18.014 get_feature(0x05) failed 00:23:18.014 Namespace ID:1 00:23:18.014 Command Set Identifier: NVM (00h) 00:23:18.014 Deallocate: Supported 00:23:18.014 Deallocated/Unwritten Error: Not Supported 00:23:18.014 Deallocated Read Value: Unknown 00:23:18.014 Deallocate in Write Zeroes: Not Supported 00:23:18.014 Deallocated Guard Field: 0xFFFF 00:23:18.014 Flush: Supported 00:23:18.014 Reservation: Not Supported 00:23:18.014 Namespace Sharing Capabilities: Multiple Controllers 00:23:18.014 Size (in LBAs): 1953525168 (931GiB) 00:23:18.014 Capacity (in LBAs): 1953525168 (931GiB) 00:23:18.014 Utilization (in LBAs): 1953525168 (931GiB) 00:23:18.014 UUID: 6c37be25-e64a-485f-b588-c23441e520b3 00:23:18.014 Thin Provisioning: Not Supported 00:23:18.014 Per-NS Atomic Units: Yes 00:23:18.014 Atomic Boundary Size (Normal): 0 00:23:18.014 Atomic Boundary Size (PFail): 0 00:23:18.014 Atomic Boundary Offset: 0 00:23:18.014 NGUID/EUI64 Never Reused: No 00:23:18.014 ANA group ID: 1 00:23:18.014 Namespace Write Protected: No 00:23:18.014 Number of LBA Formats: 1 00:23:18.014 Current LBA Format: LBA Format #00 00:23:18.014 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:18.014 00:23:18.014 23:26:33 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:23:18.014 23:26:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:18.014 23:26:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:23:18.014 23:26:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:18.014 23:26:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:23:18.014 23:26:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:18.014 23:26:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:18.014 rmmod nvme_tcp 00:23:18.014 rmmod nvme_fabrics 00:23:18.014 23:26:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:18.014 23:26:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:23:18.014 23:26:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:23:18.014 23:26:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:18.014 23:26:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:18.014 23:26:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:18.014 23:26:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:18.014 23:26:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:18.014 23:26:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:18.014 23:26:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.014 23:26:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:18.014 23:26:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.913 23:26:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:19.913 
23:26:35 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:23:19.913 23:26:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:19.913 23:26:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:23:19.913 23:26:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:19.913 23:26:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:19.913 23:26:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:19.913 23:26:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:19.913 23:26:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:19.913 23:26:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:19.913 23:26:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:21.283 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:21.283 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:21.283 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:21.283 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:21.283 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:21.283 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:21.283 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:21.283 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:21.283 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:21.283 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:21.283 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:21.283 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:21.283 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:21.283 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:21.283 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:21.283 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:22.218 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:23:22.218 00:23:22.218 real 0m9.346s 00:23:22.218 user 0m1.911s 00:23:22.218 sys 0m3.440s 00:23:22.218 23:26:37 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:22.218 23:26:37 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.218 ************************************ 00:23:22.218 END TEST nvmf_identify_kernel_target 00:23:22.218 ************************************ 00:23:22.218 23:26:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:22.218 23:26:37 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:22.478 23:26:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:22.478 23:26:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:22.478 23:26:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:22.478 ************************************ 00:23:22.478 START TEST nvmf_auth_host 00:23:22.478 ************************************ 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:22.478 * Looking for test storage... 00:23:22.478 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:23:22.478 23:26:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.377 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:24.377 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:23:24.377 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:24.377 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:24.377 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:24.377 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:24.377 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:24.377 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:23:24.377 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:24.377 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:24.378 
23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:24.378 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:24.378 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:24.378 Found net devices under 0000:84:00.0: 
cvl_0_0 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:24.378 Found net devices under 0000:84:00.1: cvl_0_1 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:24.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:24.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:23:24.378 00:23:24.378 --- 10.0.0.2 ping statistics --- 00:23:24.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.378 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:23:24.378 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:24.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:24.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:23:24.378 00:23:24.378 --- 10.0.0.1 ping statistics --- 00:23:24.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.379 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:23:24.379 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:24.379 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:23:24.379 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:24.379 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:24.379 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:24.379 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:24.379 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:24.379 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:24.379 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:24.379 23:26:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:23:24.379 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:24.379 23:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:24.379 23:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.379 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2425629 00:23:24.379 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:23:24.379 23:26:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 2425629 00:23:24.379 23:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2425629 ']' 00:23:24.379 23:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:24.379 23:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:24.379 23:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
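The nvmf_tcp_init trace above builds the point-to-point test topology: the first E810 port (cvl_0_0) is moved into a private network namespace and becomes the target-side interface at 10.0.0.2, while the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1; the two pings confirm reachability in both directions before nvmfappstart launches the SPDK app inside that namespace. A minimal standalone sketch of the same setup, using the interface names and addresses from this run (they will differ on other hosts):

# Sketch of the namespace-based loopback used by nvmftestinit/nvmf_tcp_init above.
# Interface names (cvl_0_0/cvl_0_1) and addresses are the ones seen in this run.
TARGET_IF=cvl_0_0
INITIATOR_IF=cvl_0_1
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"                  # target side lives in the namespace
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"           # initiator address (root namespace)
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                # target -> initiator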
00:23:24.379 23:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:24.379 23:26:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=955d76bb3fe281974a9f17c35143b05c 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.h5P 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 955d76bb3fe281974a9f17c35143b05c 0 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 955d76bb3fe281974a9f17c35143b05c 0 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=955d76bb3fe281974a9f17c35143b05c 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.h5P 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.h5P 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.h5P 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:23:25.750 
23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1e6e59a7605ef55f8e57530f17b300f187689ed7cc53c29e47f35e198e4d49cd 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.ELu 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1e6e59a7605ef55f8e57530f17b300f187689ed7cc53c29e47f35e198e4d49cd 3 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1e6e59a7605ef55f8e57530f17b300f187689ed7cc53c29e47f35e198e4d49cd 3 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1e6e59a7605ef55f8e57530f17b300f187689ed7cc53c29e47f35e198e4d49cd 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.ELu 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.ELu 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.ELu 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2830ac7f8164500d1254812ed01a4389d0ab32d5dc6d7dfd 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.ZcZ 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2830ac7f8164500d1254812ed01a4389d0ab32d5dc6d7dfd 0 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2830ac7f8164500d1254812ed01a4389d0ab32d5dc6d7dfd 0 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2830ac7f8164500d1254812ed01a4389d0ab32d5dc6d7dfd 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.ZcZ 00:23:25.750 23:26:40 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.ZcZ 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.ZcZ 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a25d8f569a11c6d516cf7a29cd5c80fdaa7483d39bf2a21e 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.EWF 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a25d8f569a11c6d516cf7a29cd5c80fdaa7483d39bf2a21e 2 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a25d8f569a11c6d516cf7a29cd5c80fdaa7483d39bf2a21e 2 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a25d8f569a11c6d516cf7a29cd5c80fdaa7483d39bf2a21e 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.EWF 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.EWF 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.EWF 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8483a35e3023542215a684aea94d9819 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.EWp 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8483a35e3023542215a684aea94d9819 1 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8483a35e3023542215a684aea94d9819 1 
00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8483a35e3023542215a684aea94d9819 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.EWp 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.EWp 00:23:25.750 23:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.EWp 00:23:25.751 23:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:25.751 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:25.751 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:25.751 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:25.751 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:25.751 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:25.751 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:25.751 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=dd57e7f98fcb7d768fd96ef9494801e6 00:23:25.751 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:25.751 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.T3d 00:23:25.751 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key dd57e7f98fcb7d768fd96ef9494801e6 1 00:23:25.751 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 dd57e7f98fcb7d768fd96ef9494801e6 1 00:23:25.751 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:25.751 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:25.751 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=dd57e7f98fcb7d768fd96ef9494801e6 00:23:25.751 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:25.751 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:25.751 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.T3d 00:23:25.751 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.T3d 00:23:25.751 23:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.T3d 00:23:25.751 23:26:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:23:25.751 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:25.751 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:25.751 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:25.751 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:25.751 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:25.751 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:25.751 23:26:40 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=0c22b98f7f28edba8f0cfa81d2f78b2a61eea8c901c9a13e 00:23:25.751 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:25.751 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.qbO 00:23:25.751 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0c22b98f7f28edba8f0cfa81d2f78b2a61eea8c901c9a13e 2 00:23:25.751 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0c22b98f7f28edba8f0cfa81d2f78b2a61eea8c901c9a13e 2 00:23:25.751 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:25.751 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:25.751 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0c22b98f7f28edba8f0cfa81d2f78b2a61eea8c901c9a13e 00:23:25.751 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:25.751 23:26:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:25.751 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.qbO 00:23:25.751 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.qbO 00:23:25.751 23:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.qbO 00:23:25.751 23:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:23:25.751 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:25.751 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:25.751 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:25.751 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:25.751 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:25.751 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:25.751 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f311bbbda1d194b826ac79546880f3e4 00:23:25.751 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:25.751 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.7mH 00:23:25.751 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f311bbbda1d194b826ac79546880f3e4 0 00:23:25.751 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f311bbbda1d194b826ac79546880f3e4 0 00:23:25.751 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:25.751 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:25.751 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f311bbbda1d194b826ac79546880f3e4 00:23:25.751 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:25.751 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:26.008 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.7mH 00:23:26.009 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.7mH 00:23:26.009 23:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.7mH 00:23:26.009 23:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:23:26.009 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:23:26.009 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:26.009 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:26.009 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:23:26.009 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:23:26.009 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:26.009 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=405766ff6dcd79214c7679f060a7c1dc34a5981a72756338b26155bd42778320 00:23:26.009 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:26.009 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.KYK 00:23:26.009 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 405766ff6dcd79214c7679f060a7c1dc34a5981a72756338b26155bd42778320 3 00:23:26.009 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 405766ff6dcd79214c7679f060a7c1dc34a5981a72756338b26155bd42778320 3 00:23:26.009 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:26.009 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:26.009 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=405766ff6dcd79214c7679f060a7c1dc34a5981a72756338b26155bd42778320 00:23:26.009 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:26.009 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:26.009 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.KYK 00:23:26.009 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.KYK 00:23:26.009 23:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.KYK 00:23:26.009 23:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:23:26.009 23:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2425629 00:23:26.009 23:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2425629 ']' 00:23:26.009 23:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:26.009 23:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:26.009 23:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:26.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
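Each gen_dhchap_key call above draws random bytes with xxd and pipes the resulting hex string through an inline Python helper to produce a DHHC-1 secret. Judging by the keys printed later in the trace (for example DHHC-1:00:Mjgz...IOlWzA==:), the secret appears to be "DHHC-1:<digest id>:" followed by base64 of the ASCII hex key with a little-endian CRC32 appended, plus a trailing colon. A standalone approximation of that helper follows; the file name, key length and digest id (00 = null) are chosen arbitrarily here, and the wrapping itself is inferred from the output rather than quoted from the script:

# Hedged re-creation of what gen_dhchap_key/format_dhchap_key appear to do above:
# 32 hex characters of key material wrapped as a DHHC-1 secret with digest id 00.
key=$(xxd -p -c0 -l 16 /dev/urandom)
secret=$(python3 - "$key" <<'PY'
import base64, struct, sys, zlib
key = sys.argv[1].encode()
crc = struct.pack("<I", zlib.crc32(key))          # little-endian CRC32 of the key bytes
print("DHHC-1:00:" + base64.b64encode(key + crc).decode() + ":")
PY
)
file=$(mktemp -t spdk.key-null.XXX)
echo "$secret" > "$file"
chmod 0600 "$file"
echo "$file"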
00:23:26.009 23:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:26.009 23:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.269 23:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.h5P 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.ELu ]] 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ELu 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.ZcZ 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.EWF ]] 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.EWF 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.EWp 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.T3d ]] 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.T3d 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.qbO 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.7mH ]] 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.7mH 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.KYK 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:26.270 23:26:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:27.202 Waiting for block devices as requested 00:23:27.458 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:23:27.458 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:27.715 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:27.715 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:27.715 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:27.972 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:27.972 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:27.972 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:27.972 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:27.972 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:28.228 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:28.228 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:28.228 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:28.485 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:28.485 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:28.485 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:28.485 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:29.050 23:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:29.050 23:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:29.050 23:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:29.050 23:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:23:29.051 23:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:29.051 23:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:29.051 23:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:29.051 23:26:44 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:29.051 23:26:44 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:29.051 No valid GPT data, bailing 00:23:29.051 23:26:44 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:29.051 23:26:44 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:23:29.051 23:26:44 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:23:29.051 23:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:29.051 23:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:23:29.051 23:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:29.051 23:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:29.051 23:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:29.051 23:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:23:29.051 23:26:44 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:23:29.051 23:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:23:29.051 23:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:23:29.051 23:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:29.051 23:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:23:29.051 23:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:23:29.051 23:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:23:29.051 23:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:29.051 23:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:23:29.051 00:23:29.051 Discovery Log Number of Records 2, Generation counter 2 00:23:29.051 =====Discovery Log Entry 0====== 00:23:29.051 trtype: tcp 00:23:29.051 adrfam: ipv4 00:23:29.051 subtype: current discovery subsystem 00:23:29.051 treq: not specified, sq flow control disable supported 00:23:29.051 portid: 1 00:23:29.051 trsvcid: 4420 00:23:29.051 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:29.051 traddr: 10.0.0.1 00:23:29.051 eflags: none 00:23:29.051 sectype: none 00:23:29.051 =====Discovery Log Entry 1====== 00:23:29.051 trtype: tcp 00:23:29.051 adrfam: ipv4 00:23:29.051 subtype: nvme subsystem 00:23:29.051 treq: not specified, sq flow control disable supported 00:23:29.051 portid: 1 00:23:29.051 trsvcid: 4420 00:23:29.051 subnqn: nqn.2024-02.io.spdk:cnode0 00:23:29.051 traddr: 10.0.0.1 00:23:29.051 eflags: none 00:23:29.051 sectype: none 00:23:29.051 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:29.051 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:23:29.051 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:29.308 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:29.308 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.308 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:29.308 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:29.308 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgzMGFjN2Y4MTY0NTAwZDEyNTQ4MTJlZDAxYTQzODlkMGFiMzJkNWRjNmQ3ZGZkIOlWzA==: 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgzMGFjN2Y4MTY0NTAwZDEyNTQ4MTJlZDAxYTQzODlkMGFiMzJkNWRjNmQ3ZGZkIOlWzA==: 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: 
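configure_kernel_target above assembles the kernel target entirely through configfs: a subsystem directory, one namespace backed by the whole /dev/nvme0n1 (the GPT probe bailed, so the disk is unused), and a TCP port at 10.0.0.1:4420, linked together; nvmet_auth_init then creates a host entry, disables allow-any-host and whitelists nqn.2024-02.io.spdk:host0. The discovery log with two records confirms the target is reachable. The individual attribute file names are not visible in the trace (only the echo values are), so the sketch below fills them in from the standard kernel nvmet configfs layout:

# Sketch of the configfs steps behind configure_kernel_target/nvmet_auth_init.
# Attribute names (device_path, enable, addr_*, attr_allow_any_host) are the stock
# kernel nvmet names; the script's exact redirect targets are an assumption.
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=/sys/kernel/config/nvmet/ports/1
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

modprobe nvmet
modprobe nvmet-tcp
mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$port"
mkdir "$host"

echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"

echo 10.0.0.1 > "$port/addr_traddr"
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"

echo 0 > "$subsys/attr_allow_any_host"        # only explicitly allowed hosts may connect
ln -s "$host" "$subsys/allowed_hosts/"

nvme discover -t tcp -a 10.0.0.1 -s 4420      # hostnqn/hostid options omitted here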
]] 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.309 nvme0n1 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.309 
23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU1ZDc2YmIzZmUyODE5NzRhOWYxN2MzNTE0M2IwNWNEzqNT: 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWU2ZTU5YTc2MDVlZjU1ZjhlNTc1MzBmMTdiMzAwZjE4NzY4OWVkN2NjNTNjMjllNDdmMzVlMTk4ZTRkNDljZB7gIto=: 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU1ZDc2YmIzZmUyODE5NzRhOWYxN2MzNTE0M2IwNWNEzqNT: 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWU2ZTU5YTc2MDVlZjU1ZjhlNTc1MzBmMTdiMzAwZjE4NzY4OWVkN2NjNTNjMjllNDdmMzVlMTk4ZTRkNDljZB7gIto=: ]] 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWU2ZTU5YTc2MDVlZjU1ZjhlNTc1MzBmMTdiMzAwZjE4NzY4OWVkN2NjNTNjMjllNDdmMzVlMTk4ZTRkNDljZB7gIto=: 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.309 
23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.309 23:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.566 nvme0n1 00:23:29.567 23:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.567 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.567 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.567 23:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.567 23:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.567 23:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.567 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.567 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.567 23:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.567 23:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.567 23:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.567 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.567 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:29.567 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.567 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:29.567 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:29.567 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:29.567 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgzMGFjN2Y4MTY0NTAwZDEyNTQ4MTJlZDAxYTQzODlkMGFiMzJkNWRjNmQ3ZGZkIOlWzA==: 00:23:29.567 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: 00:23:29.567 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:29.567 23:26:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:29.567 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgzMGFjN2Y4MTY0NTAwZDEyNTQ4MTJlZDAxYTQzODlkMGFiMzJkNWRjNmQ3ZGZkIOlWzA==: 00:23:29.567 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: ]] 00:23:29.567 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: 00:23:29.567 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:23:29.567 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.567 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:29.567 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:29.567 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:29.567 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.567 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:29.567 23:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.567 23:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.567 23:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.567 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.567 23:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.567 23:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.567 23:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.567 23:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.567 23:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.567 23:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.567 23:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.567 23:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.567 23:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.567 23:26:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.567 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:29.567 23:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.567 23:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.823 nvme0n1 00:23:29.823 23:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.823 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.823 23:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.823 23:26:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.823 23:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
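The trace above is one iteration of the host-authentication test: the in-kernel nvmet target is programmed with the DH-HMAC-CHAP key pair for keyid 1, the SPDK initiator is restricted to the matching digest/DH group, and the controller is attached with that key pair, verified, and detached. Below is a minimal sketch of the initiator-side RPC sequence this trace exercises, assuming SPDK's scripts/rpc.py is driving the running target app (rpc_cmd in the trace wraps it) and that key1/ckey1 name DH-HMAC-CHAP keys already loaded by the test's setup, which is not shown in this excerpt:

  # Restrict the initiator to one digest/DH-group combination for this iteration
  scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests sha256 \
      --dhchap-dhgroups ffdhe2048

  # Attach the controller over TCP (10.0.0.1:4420), presenting key1 to the
  # target and expecting ckey1 back for bidirectional authentication
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Confirm the controller came up, then tear it down for the next combination
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'
  scripts/rpc.py bdev_nvme_detach_controller nvme0

The same attach/verify/detach pattern repeats throughout the remainder of the log with different digests, DH groups, and key indexes.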
00:23:29.823 23:26:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.823 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.823 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.823 23:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.823 23:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.823 23:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.823 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.823 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:29.823 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.823 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:29.823 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:29.823 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:29.823 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODQ4M2EzNWUzMDIzNTQyMjE1YTY4NGFlYTk0ZDk4MTlkmNtZ: 00:23:29.823 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGQ1N2U3Zjk4ZmNiN2Q3NjhmZDk2ZWY5NDk0ODAxZTZnzRsA: 00:23:29.823 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:29.823 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:29.823 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODQ4M2EzNWUzMDIzNTQyMjE1YTY4NGFlYTk0ZDk4MTlkmNtZ: 00:23:29.823 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGQ1N2U3Zjk4ZmNiN2Q3NjhmZDk2ZWY5NDk0ODAxZTZnzRsA: ]] 00:23:29.823 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGQ1N2U3Zjk4ZmNiN2Q3NjhmZDk2ZWY5NDk0ODAxZTZnzRsA: 00:23:29.823 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:23:29.823 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.823 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:29.823 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:29.823 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:29.823 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.823 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:29.823 23:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.823 23:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.823 23:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.823 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.823 23:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.823 23:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.823 23:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.823 23:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.823 23:26:45 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.823 23:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.823 23:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.823 23:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.823 23:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.823 23:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.823 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:29.823 23:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.823 23:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.080 nvme0n1 00:23:30.080 23:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.080 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.080 23:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.080 23:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.080 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.080 23:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.080 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.080 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.080 23:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.080 23:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.080 23:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.080 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.080 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:23:30.080 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.080 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:30.080 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:30.080 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:30.080 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGMyMmI5OGY3ZjI4ZWRiYThmMGNmYTgxZDJmNzhiMmE2MWVlYThjOTAxYzlhMTNlmI+AFw==: 00:23:30.080 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjMxMWJiYmRhMWQxOTRiODI2YWM3OTU0Njg4MGYzZTT4mE5V: 00:23:30.080 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:30.080 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:30.080 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGMyMmI5OGY3ZjI4ZWRiYThmMGNmYTgxZDJmNzhiMmE2MWVlYThjOTAxYzlhMTNlmI+AFw==: 00:23:30.080 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjMxMWJiYmRhMWQxOTRiODI2YWM3OTU0Njg4MGYzZTT4mE5V: ]] 00:23:30.080 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjMxMWJiYmRhMWQxOTRiODI2YWM3OTU0Njg4MGYzZTT4mE5V: 00:23:30.080 23:26:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:23:30.080 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.080 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:30.080 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:30.080 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:30.080 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.080 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:30.080 23:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.080 23:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.080 23:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.080 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.080 23:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.080 23:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.080 23:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.080 23:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.080 23:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.080 23:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:30.080 23:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.080 23:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:30.080 23:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:30.080 23:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:30.080 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:30.080 23:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.080 23:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.337 nvme0n1 00:23:30.337 23:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.337 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.337 23:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.337 23:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.337 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.337 23:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.337 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.337 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.337 23:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.337 23:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.337 23:26:45 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.337 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.337 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:23:30.337 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.337 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:30.337 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:30.337 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:30.337 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDA1NzY2ZmY2ZGNkNzkyMTRjNzY3OWYwNjBhN2MxZGMzNGE1OTgxYTcyNzU2MzM4YjI2MTU1YmQ0Mjc3ODMyMKIAFXU=: 00:23:30.337 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:30.337 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:30.337 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:30.337 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDA1NzY2ZmY2ZGNkNzkyMTRjNzY3OWYwNjBhN2MxZGMzNGE1OTgxYTcyNzU2MzM4YjI2MTU1YmQ0Mjc3ODMyMKIAFXU=: 00:23:30.337 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:30.337 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:23:30.337 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.337 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:30.337 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:30.337 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:30.337 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.337 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:30.337 23:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.337 23:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.337 23:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.337 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.337 23:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.337 23:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.337 23:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.337 23:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.337 23:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.337 23:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:30.337 23:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.337 23:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:30.337 23:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:30.337 23:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:30.337 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:30.337 23:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.337 23:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.594 nvme0n1 00:23:30.594 23:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.594 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.594 23:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.594 23:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.594 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.594 23:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.594 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.594 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.594 23:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.594 23:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.594 23:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.594 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:30.594 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.594 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:23:30.594 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.594 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:30.594 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:30.594 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:30.594 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU1ZDc2YmIzZmUyODE5NzRhOWYxN2MzNTE0M2IwNWNEzqNT: 00:23:30.594 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWU2ZTU5YTc2MDVlZjU1ZjhlNTc1MzBmMTdiMzAwZjE4NzY4OWVkN2NjNTNjMjllNDdmMzVlMTk4ZTRkNDljZB7gIto=: 00:23:30.594 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:30.594 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:30.595 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU1ZDc2YmIzZmUyODE5NzRhOWYxN2MzNTE0M2IwNWNEzqNT: 00:23:30.595 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWU2ZTU5YTc2MDVlZjU1ZjhlNTc1MzBmMTdiMzAwZjE4NzY4OWVkN2NjNTNjMjllNDdmMzVlMTk4ZTRkNDljZB7gIto=: ]] 00:23:30.595 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWU2ZTU5YTc2MDVlZjU1ZjhlNTc1MzBmMTdiMzAwZjE4NzY4OWVkN2NjNTNjMjllNDdmMzVlMTk4ZTRkNDljZB7gIto=: 00:23:30.595 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:23:30.595 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.595 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:30.595 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:30.595 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:30.595 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:23:30.595 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:30.595 23:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.595 23:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.595 23:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.595 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.595 23:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.595 23:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.595 23:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.595 23:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.595 23:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.595 23:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:30.595 23:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.595 23:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:30.595 23:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:30.595 23:26:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:30.595 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:30.595 23:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.595 23:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.852 nvme0n1 00:23:30.852 23:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.852 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.852 23:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.852 23:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.852 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.852 23:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.852 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.852 23:26:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.852 23:26:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.852 23:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.852 23:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.852 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.852 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:23:30.852 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.852 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:30.852 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:30.852 23:26:46 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:23:30.852 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgzMGFjN2Y4MTY0NTAwZDEyNTQ4MTJlZDAxYTQzODlkMGFiMzJkNWRjNmQ3ZGZkIOlWzA==: 00:23:30.852 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: 00:23:30.852 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:30.852 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:30.852 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgzMGFjN2Y4MTY0NTAwZDEyNTQ4MTJlZDAxYTQzODlkMGFiMzJkNWRjNmQ3ZGZkIOlWzA==: 00:23:30.852 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: ]] 00:23:30.852 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: 00:23:30.852 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:23:30.853 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.853 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:30.853 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:30.853 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:30.853 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.853 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:30.853 23:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.853 23:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.853 23:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.853 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.853 23:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.853 23:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.853 23:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.853 23:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.853 23:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.853 23:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:30.853 23:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.853 23:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:30.853 23:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:30.853 23:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:30.853 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:30.853 23:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.853 23:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.111 nvme0n1 00:23:31.111 
23:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.111 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.111 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.111 23:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.111 23:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.111 23:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.111 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.111 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.111 23:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.111 23:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.111 23:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.111 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.111 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:23:31.111 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.111 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:31.111 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:31.111 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:31.111 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODQ4M2EzNWUzMDIzNTQyMjE1YTY4NGFlYTk0ZDk4MTlkmNtZ: 00:23:31.111 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGQ1N2U3Zjk4ZmNiN2Q3NjhmZDk2ZWY5NDk0ODAxZTZnzRsA: 00:23:31.111 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:31.111 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:31.111 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODQ4M2EzNWUzMDIzNTQyMjE1YTY4NGFlYTk0ZDk4MTlkmNtZ: 00:23:31.111 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGQ1N2U3Zjk4ZmNiN2Q3NjhmZDk2ZWY5NDk0ODAxZTZnzRsA: ]] 00:23:31.111 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGQ1N2U3Zjk4ZmNiN2Q3NjhmZDk2ZWY5NDk0ODAxZTZnzRsA: 00:23:31.111 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:23:31.111 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.111 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:31.111 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:31.111 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:31.111 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.111 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:31.111 23:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.111 23:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.111 23:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.111 23:26:46 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:23:31.111 23:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.111 23:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.111 23:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.111 23:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.111 23:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.111 23:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:31.111 23:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.111 23:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:31.112 23:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:31.112 23:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:31.112 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:31.112 23:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.112 23:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.369 nvme0n1 00:23:31.369 23:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.369 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.369 23:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.369 23:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.369 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.369 23:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.369 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.369 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.369 23:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.369 23:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.369 23:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.369 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.369 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:23:31.369 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.369 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:31.369 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:31.369 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:31.369 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGMyMmI5OGY3ZjI4ZWRiYThmMGNmYTgxZDJmNzhiMmE2MWVlYThjOTAxYzlhMTNlmI+AFw==: 00:23:31.369 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjMxMWJiYmRhMWQxOTRiODI2YWM3OTU0Njg4MGYzZTT4mE5V: 00:23:31.369 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:31.369 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
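The repeated blocks in this log correspond to the test's nested sweep: every digest (sha256, sha384, sha512) is combined with every DH group (ffdhe2048 through ffdhe8192) and each of the five configured key indexes (0-4) is provisioned on the target and then attached from the initiator. A minimal reconstruction of that loop as it appears in the trace (host/auth.sh lines 100-104); nvmet_auth_set_key and connect_authenticate are the test's own helpers, and the keys/ckeys arrays holding the DHHC-1 strings seen above are populated earlier in the script, outside this excerpt:

  digests=(sha256 sha384 sha512)
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)

  for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
        # Program the kernel nvmet target with this key/digest/dhgroup combination
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
        # Reconfigure the SPDK initiator and perform an authenticated attach/detach
        connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
    done
  done

This is why the trace below continues through ffdhe3072 and ffdhe4096 with the same sequence of rpc_cmd calls for each key index.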
00:23:31.370 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGMyMmI5OGY3ZjI4ZWRiYThmMGNmYTgxZDJmNzhiMmE2MWVlYThjOTAxYzlhMTNlmI+AFw==: 00:23:31.370 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjMxMWJiYmRhMWQxOTRiODI2YWM3OTU0Njg4MGYzZTT4mE5V: ]] 00:23:31.370 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjMxMWJiYmRhMWQxOTRiODI2YWM3OTU0Njg4MGYzZTT4mE5V: 00:23:31.370 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:23:31.370 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.370 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:31.370 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:31.370 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:31.370 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.370 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:31.370 23:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.370 23:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.370 23:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.370 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.370 23:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.370 23:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.370 23:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.370 23:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.370 23:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.370 23:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:31.370 23:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.370 23:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:31.370 23:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:31.370 23:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:31.370 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:31.370 23:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.370 23:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.628 nvme0n1 00:23:31.628 23:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.628 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.628 23:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.628 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.628 23:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.628 23:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.628 
23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.628 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.628 23:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.628 23:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.628 23:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.628 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.628 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:23:31.628 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.628 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:31.628 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:31.628 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:31.628 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDA1NzY2ZmY2ZGNkNzkyMTRjNzY3OWYwNjBhN2MxZGMzNGE1OTgxYTcyNzU2MzM4YjI2MTU1YmQ0Mjc3ODMyMKIAFXU=: 00:23:31.628 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:31.628 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:31.628 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:31.628 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDA1NzY2ZmY2ZGNkNzkyMTRjNzY3OWYwNjBhN2MxZGMzNGE1OTgxYTcyNzU2MzM4YjI2MTU1YmQ0Mjc3ODMyMKIAFXU=: 00:23:31.628 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:31.628 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:23:31.628 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.628 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:31.628 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:31.628 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:31.628 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.628 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:31.628 23:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.628 23:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.628 23:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.628 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.628 23:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.628 23:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.628 23:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.628 23:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.628 23:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.628 23:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:31.628 23:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.628 23:26:46 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:31.628 23:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:31.628 23:26:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:31.628 23:26:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:31.628 23:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.628 23:26:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.884 nvme0n1 00:23:31.884 23:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.884 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.884 23:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.884 23:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.884 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.884 23:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.884 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.884 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.884 23:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.884 23:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.884 23:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.884 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:31.884 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.884 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:23:31.884 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.884 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:31.884 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:31.884 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:31.884 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU1ZDc2YmIzZmUyODE5NzRhOWYxN2MzNTE0M2IwNWNEzqNT: 00:23:31.884 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWU2ZTU5YTc2MDVlZjU1ZjhlNTc1MzBmMTdiMzAwZjE4NzY4OWVkN2NjNTNjMjllNDdmMzVlMTk4ZTRkNDljZB7gIto=: 00:23:31.884 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:31.885 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:31.885 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU1ZDc2YmIzZmUyODE5NzRhOWYxN2MzNTE0M2IwNWNEzqNT: 00:23:31.885 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWU2ZTU5YTc2MDVlZjU1ZjhlNTc1MzBmMTdiMzAwZjE4NzY4OWVkN2NjNTNjMjllNDdmMzVlMTk4ZTRkNDljZB7gIto=: ]] 00:23:31.885 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWU2ZTU5YTc2MDVlZjU1ZjhlNTc1MzBmMTdiMzAwZjE4NzY4OWVkN2NjNTNjMjllNDdmMzVlMTk4ZTRkNDljZB7gIto=: 00:23:31.885 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:23:31.885 23:26:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.885 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:31.885 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:31.885 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:31.885 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.885 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:31.885 23:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.885 23:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.885 23:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.885 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.885 23:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.885 23:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.885 23:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.885 23:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.885 23:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.885 23:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:31.885 23:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.885 23:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:31.885 23:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:31.885 23:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:31.885 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:31.885 23:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.885 23:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.449 nvme0n1 00:23:32.449 23:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.449 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.449 23:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.449 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.449 23:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.449 23:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.449 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.449 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.449 23:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.449 23:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.449 23:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.449 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:32.449 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:23:32.449 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.449 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:32.449 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:32.449 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:32.449 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgzMGFjN2Y4MTY0NTAwZDEyNTQ4MTJlZDAxYTQzODlkMGFiMzJkNWRjNmQ3ZGZkIOlWzA==: 00:23:32.449 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: 00:23:32.449 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:32.449 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:32.449 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgzMGFjN2Y4MTY0NTAwZDEyNTQ4MTJlZDAxYTQzODlkMGFiMzJkNWRjNmQ3ZGZkIOlWzA==: 00:23:32.449 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: ]] 00:23:32.449 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: 00:23:32.449 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:23:32.449 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.449 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:32.449 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:32.449 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:32.449 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.449 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:32.449 23:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.449 23:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.449 23:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.449 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.449 23:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.449 23:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.449 23:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.449 23:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.449 23:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.449 23:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:32.449 23:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.449 23:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:32.449 23:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:32.449 23:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:32.449 23:26:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:32.449 23:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.449 23:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.707 nvme0n1 00:23:32.707 23:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.707 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.707 23:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.707 23:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.707 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.707 23:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.707 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.707 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.707 23:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.707 23:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.707 23:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.707 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.707 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:23:32.707 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.707 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:32.707 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:32.707 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:32.707 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODQ4M2EzNWUzMDIzNTQyMjE1YTY4NGFlYTk0ZDk4MTlkmNtZ: 00:23:32.707 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGQ1N2U3Zjk4ZmNiN2Q3NjhmZDk2ZWY5NDk0ODAxZTZnzRsA: 00:23:32.707 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:32.707 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:32.707 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODQ4M2EzNWUzMDIzNTQyMjE1YTY4NGFlYTk0ZDk4MTlkmNtZ: 00:23:32.707 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGQ1N2U3Zjk4ZmNiN2Q3NjhmZDk2ZWY5NDk0ODAxZTZnzRsA: ]] 00:23:32.707 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGQ1N2U3Zjk4ZmNiN2Q3NjhmZDk2ZWY5NDk0ODAxZTZnzRsA: 00:23:32.707 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:23:32.707 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.707 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:32.707 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:32.707 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:32.707 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.707 23:26:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:32.707 23:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.707 23:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.707 23:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.707 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.707 23:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.707 23:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.707 23:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.707 23:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.707 23:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.707 23:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:32.707 23:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.707 23:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:32.707 23:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:32.707 23:26:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:32.707 23:26:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:32.707 23:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.707 23:26:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.965 nvme0n1 00:23:32.965 23:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.965 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.965 23:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.965 23:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.965 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.965 23:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.965 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.965 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.965 23:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.965 23:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.224 23:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.224 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.224 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:23:33.224 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.224 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:33.224 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:33.224 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
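Each key id in the trace above exercises the same host-side sequence: restrict the initiator to the digest and DH group under test, attach the controller with that key's secrets, confirm the controller came up under its expected name, and detach before the next key. Condensed into plain RPC calls it looks roughly like the sketch below; this is a restatement of the trace, not the script itself, and it assumes the rpc_cmd wrapper seen above forwards to SPDK's scripts/rpc.py against the running initiator app (key0/ckey0 are the key names registered earlier in the test, outside this excerpt).

  # Allow only the digest/DH group pair under test (sha256 + ffdhe4096 in this pass).
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
  # Attach to the target at 10.0.0.1:4420, authenticating with key0 (and ckey0 for bidirectional auth).
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # Authentication succeeded if the controller is listed under the name requested with -b.
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
  # Tear the controller down before the next key id is tried.
  scripts/rpc.py bdev_nvme_detach_controller nvme0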
00:23:33.224 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGMyMmI5OGY3ZjI4ZWRiYThmMGNmYTgxZDJmNzhiMmE2MWVlYThjOTAxYzlhMTNlmI+AFw==: 00:23:33.224 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjMxMWJiYmRhMWQxOTRiODI2YWM3OTU0Njg4MGYzZTT4mE5V: 00:23:33.224 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:33.224 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:33.224 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGMyMmI5OGY3ZjI4ZWRiYThmMGNmYTgxZDJmNzhiMmE2MWVlYThjOTAxYzlhMTNlmI+AFw==: 00:23:33.224 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjMxMWJiYmRhMWQxOTRiODI2YWM3OTU0Njg4MGYzZTT4mE5V: ]] 00:23:33.224 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjMxMWJiYmRhMWQxOTRiODI2YWM3OTU0Njg4MGYzZTT4mE5V: 00:23:33.224 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:23:33.224 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.224 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:33.224 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:33.224 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:33.224 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.224 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:33.224 23:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.224 23:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.224 23:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.224 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.224 23:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:33.224 23:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:33.224 23:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:33.224 23:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.224 23:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.224 23:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:33.224 23:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.224 23:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:33.224 23:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:33.224 23:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:33.224 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:33.224 23:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.224 23:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.482 nvme0n1 00:23:33.482 23:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.482 23:26:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.482 23:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.482 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.482 23:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.482 23:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.482 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.482 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.482 23:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.482 23:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.482 23:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.482 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.482 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:23:33.482 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.482 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:33.482 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:33.482 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:33.482 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDA1NzY2ZmY2ZGNkNzkyMTRjNzY3OWYwNjBhN2MxZGMzNGE1OTgxYTcyNzU2MzM4YjI2MTU1YmQ0Mjc3ODMyMKIAFXU=: 00:23:33.482 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:33.482 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:33.482 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:33.482 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDA1NzY2ZmY2ZGNkNzkyMTRjNzY3OWYwNjBhN2MxZGMzNGE1OTgxYTcyNzU2MzM4YjI2MTU1YmQ0Mjc3ODMyMKIAFXU=: 00:23:33.482 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:33.482 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:23:33.482 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.482 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:33.482 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:33.482 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:33.482 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.483 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:33.483 23:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.483 23:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.483 23:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.483 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.483 23:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:33.483 23:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:33.483 23:26:48 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:23:33.483 23:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.483 23:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.483 23:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:33.483 23:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.483 23:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:33.483 23:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:33.483 23:26:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:33.483 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:33.483 23:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.483 23:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.740 nvme0n1 00:23:33.741 23:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.741 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.741 23:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.741 23:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.741 23:26:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.741 23:26:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.741 23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.741 23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.741 23:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.741 23:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.741 23:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.741 23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:33.741 23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.741 23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:23:33.741 23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.741 23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:33.741 23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:33.741 23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:33.741 23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU1ZDc2YmIzZmUyODE5NzRhOWYxN2MzNTE0M2IwNWNEzqNT: 00:23:33.741 23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWU2ZTU5YTc2MDVlZjU1ZjhlNTc1MzBmMTdiMzAwZjE4NzY4OWVkN2NjNTNjMjllNDdmMzVlMTk4ZTRkNDljZB7gIto=: 00:23:33.741 23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:33.741 23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:33.741 23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU1ZDc2YmIzZmUyODE5NzRhOWYxN2MzNTE0M2IwNWNEzqNT: 00:23:33.741 23:26:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWU2ZTU5YTc2MDVlZjU1ZjhlNTc1MzBmMTdiMzAwZjE4NzY4OWVkN2NjNTNjMjllNDdmMzVlMTk4ZTRkNDljZB7gIto=: ]] 00:23:33.741 23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWU2ZTU5YTc2MDVlZjU1ZjhlNTc1MzBmMTdiMzAwZjE4NzY4OWVkN2NjNTNjMjllNDdmMzVlMTk4ZTRkNDljZB7gIto=: 00:23:33.741 23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:23:33.741 23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.741 23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:33.741 23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:33.741 23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:33.741 23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.741 23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:33.741 23:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.741 23:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.741 23:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.741 23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.741 23:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:33.741 23:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:33.741 23:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:33.741 23:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.741 23:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.741 23:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:33.741 23:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.741 23:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:33.741 23:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:33.741 23:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:33.741 23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:33.741 23:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.741 23:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.675 nvme0n1 00:23:34.675 23:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.675 23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.675 23:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.675 23:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.675 23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.675 23:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.675 23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.675 
23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.675 23:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.675 23:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.675 23:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.675 23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.675 23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:23:34.675 23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.675 23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:34.675 23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:34.675 23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:34.675 23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgzMGFjN2Y4MTY0NTAwZDEyNTQ4MTJlZDAxYTQzODlkMGFiMzJkNWRjNmQ3ZGZkIOlWzA==: 00:23:34.675 23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: 00:23:34.675 23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:34.675 23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:34.675 23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgzMGFjN2Y4MTY0NTAwZDEyNTQ4MTJlZDAxYTQzODlkMGFiMzJkNWRjNmQ3ZGZkIOlWzA==: 00:23:34.675 23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: ]] 00:23:34.675 23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: 00:23:34.675 23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:23:34.675 23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.675 23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:34.675 23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:34.675 23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:34.675 23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.675 23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:34.675 23:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.675 23:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.675 23:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.675 23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.675 23:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:34.675 23:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:34.675 23:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:34.676 23:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.676 23:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.676 23:26:49 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:34.676 23:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.676 23:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:34.676 23:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:34.676 23:26:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:34.676 23:26:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:34.676 23:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.676 23:26:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.300 nvme0n1 00:23:35.301 23:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.301 23:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.301 23:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.301 23:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.301 23:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.301 23:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.301 23:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.301 23:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.301 23:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.301 23:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.301 23:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.301 23:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.301 23:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:23:35.301 23:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.301 23:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:35.301 23:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:35.301 23:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:35.301 23:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODQ4M2EzNWUzMDIzNTQyMjE1YTY4NGFlYTk0ZDk4MTlkmNtZ: 00:23:35.301 23:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGQ1N2U3Zjk4ZmNiN2Q3NjhmZDk2ZWY5NDk0ODAxZTZnzRsA: 00:23:35.301 23:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:35.301 23:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:35.301 23:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODQ4M2EzNWUzMDIzNTQyMjE1YTY4NGFlYTk0ZDk4MTlkmNtZ: 00:23:35.301 23:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGQ1N2U3Zjk4ZmNiN2Q3NjhmZDk2ZWY5NDk0ODAxZTZnzRsA: ]] 00:23:35.301 23:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGQ1N2U3Zjk4ZmNiN2Q3NjhmZDk2ZWY5NDk0ODAxZTZnzRsA: 00:23:35.301 23:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:23:35.301 23:26:50 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.301 23:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:35.301 23:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:35.301 23:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:35.301 23:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.301 23:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:35.301 23:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.301 23:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.301 23:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.301 23:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.301 23:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:35.301 23:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:35.301 23:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:35.301 23:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.301 23:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.301 23:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:35.301 23:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.301 23:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:35.301 23:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:35.301 23:26:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:35.301 23:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:35.301 23:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.301 23:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.925 nvme0n1 00:23:35.925 23:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.925 23:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.925 23:26:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.925 23:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.925 23:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.925 23:26:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.925 23:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.925 23:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.926 23:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.926 23:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.926 23:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.926 23:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.926 
23:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:23:35.926 23:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.926 23:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:35.926 23:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:35.926 23:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:35.926 23:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGMyMmI5OGY3ZjI4ZWRiYThmMGNmYTgxZDJmNzhiMmE2MWVlYThjOTAxYzlhMTNlmI+AFw==: 00:23:35.926 23:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjMxMWJiYmRhMWQxOTRiODI2YWM3OTU0Njg4MGYzZTT4mE5V: 00:23:35.926 23:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:35.926 23:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:35.926 23:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGMyMmI5OGY3ZjI4ZWRiYThmMGNmYTgxZDJmNzhiMmE2MWVlYThjOTAxYzlhMTNlmI+AFw==: 00:23:35.926 23:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjMxMWJiYmRhMWQxOTRiODI2YWM3OTU0Njg4MGYzZTT4mE5V: ]] 00:23:35.926 23:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjMxMWJiYmRhMWQxOTRiODI2YWM3OTU0Njg4MGYzZTT4mE5V: 00:23:35.926 23:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:23:35.926 23:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.926 23:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:35.926 23:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:35.926 23:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:35.926 23:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.926 23:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:35.926 23:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.926 23:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.926 23:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.926 23:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.926 23:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:35.926 23:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:35.926 23:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:35.926 23:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.926 23:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.926 23:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:35.926 23:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.926 23:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:35.926 23:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:35.926 23:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:35.926 23:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:35.926 23:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.926 23:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.516 nvme0n1 00:23:36.516 23:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.516 23:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.516 23:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.516 23:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:36.516 23:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.516 23:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.516 23:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.516 23:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.516 23:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.516 23:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.516 23:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.516 23:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:36.516 23:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:23:36.516 23:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:36.516 23:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:36.516 23:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:36.516 23:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:36.516 23:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDA1NzY2ZmY2ZGNkNzkyMTRjNzY3OWYwNjBhN2MxZGMzNGE1OTgxYTcyNzU2MzM4YjI2MTU1YmQ0Mjc3ODMyMKIAFXU=: 00:23:36.516 23:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:36.516 23:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:36.516 23:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:36.516 23:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDA1NzY2ZmY2ZGNkNzkyMTRjNzY3OWYwNjBhN2MxZGMzNGE1OTgxYTcyNzU2MzM4YjI2MTU1YmQ0Mjc3ODMyMKIAFXU=: 00:23:36.516 23:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:36.516 23:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:23:36.516 23:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:36.516 23:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:36.516 23:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:36.516 23:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:36.516 23:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:36.516 23:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:36.516 23:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.516 23:26:51 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:36.516 23:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.516 23:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:36.516 23:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:36.516 23:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:36.516 23:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:36.517 23:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.517 23:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.517 23:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:36.517 23:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:36.517 23:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:36.517 23:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:36.517 23:26:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:36.517 23:26:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:36.517 23:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.517 23:26:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.082 nvme0n1 00:23:37.082 23:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.082 23:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.082 23:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.082 23:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.082 23:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.082 23:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.082 23:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.082 23:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.082 23:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.082 23:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.082 23:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.082 23:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:37.083 23:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.083 23:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:23:37.083 23:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.083 23:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:37.083 23:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:37.083 23:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:37.083 23:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU1ZDc2YmIzZmUyODE5NzRhOWYxN2MzNTE0M2IwNWNEzqNT: 00:23:37.083 23:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MWU2ZTU5YTc2MDVlZjU1ZjhlNTc1MzBmMTdiMzAwZjE4NzY4OWVkN2NjNTNjMjllNDdmMzVlMTk4ZTRkNDljZB7gIto=: 00:23:37.083 23:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:37.083 23:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:37.083 23:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU1ZDc2YmIzZmUyODE5NzRhOWYxN2MzNTE0M2IwNWNEzqNT: 00:23:37.083 23:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWU2ZTU5YTc2MDVlZjU1ZjhlNTc1MzBmMTdiMzAwZjE4NzY4OWVkN2NjNTNjMjllNDdmMzVlMTk4ZTRkNDljZB7gIto=: ]] 00:23:37.083 23:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWU2ZTU5YTc2MDVlZjU1ZjhlNTc1MzBmMTdiMzAwZjE4NzY4OWVkN2NjNTNjMjllNDdmMzVlMTk4ZTRkNDljZB7gIto=: 00:23:37.083 23:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:23:37.083 23:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.083 23:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:37.083 23:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:37.083 23:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:37.083 23:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:37.083 23:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:37.083 23:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.083 23:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.083 23:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.083 23:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.083 23:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:37.083 23:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:37.083 23:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:37.083 23:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.083 23:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.083 23:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:37.083 23:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.083 23:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:37.083 23:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:37.083 23:26:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:37.083 23:26:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:37.083 23:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.083 23:26:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.017 nvme0n1 00:23:38.017 23:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.017 23:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.017 23:26:53 
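The nvmet_auth_set_key calls interleaved through this trace provision the matching material on the kernel nvmet target before each connect: the echoed 'hmac(sha256)' digest, the DH group, the DHHC-1 host key and, when one is defined, the controller key. The trace does not show where those echoes are redirected; the sketch below is only an assumption of the usual kernel nvmet configfs host attributes, with the key material replaced by placeholders.

  # Assumed nvmet configfs host entry; the actual path is created earlier in the test, outside this excerpt.
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)'          > "$host/dhchap_hash"      # digest echoed at host/auth.sh@48
  echo ffdhe8192               > "$host/dhchap_dhgroup"   # DH group echoed at host/auth.sh@49
  echo 'DHHC-1:00:<host key>'  > "$host/dhchap_key"       # key echoed at host/auth.sh@50 (placeholder)
  echo 'DHHC-1:03:<ctrlr key>' > "$host/dhchap_ctrl_key"  # controller key echoed at host/auth.sh@51, when set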
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.017 23:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.017 23:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.017 23:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.017 23:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.017 23:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.017 23:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.017 23:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.017 23:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.017 23:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.017 23:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:23:38.017 23:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.017 23:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:38.017 23:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:38.017 23:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:38.017 23:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgzMGFjN2Y4MTY0NTAwZDEyNTQ4MTJlZDAxYTQzODlkMGFiMzJkNWRjNmQ3ZGZkIOlWzA==: 00:23:38.017 23:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: 00:23:38.017 23:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:38.017 23:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:38.017 23:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgzMGFjN2Y4MTY0NTAwZDEyNTQ4MTJlZDAxYTQzODlkMGFiMzJkNWRjNmQ3ZGZkIOlWzA==: 00:23:38.017 23:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: ]] 00:23:38.017 23:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: 00:23:38.017 23:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:23:38.017 23:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.017 23:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:38.017 23:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:38.017 23:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:38.017 23:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.017 23:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:38.017 23:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.017 23:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.274 23:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.274 23:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.274 23:26:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:23:38.274 23:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:38.274 23:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:38.274 23:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.274 23:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.274 23:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:38.274 23:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.274 23:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:38.274 23:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:38.274 23:26:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:38.274 23:26:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:38.274 23:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.274 23:26:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.206 nvme0n1 00:23:39.206 23:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.206 23:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.206 23:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.206 23:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.206 23:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.206 23:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.206 23:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.206 23:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.206 23:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.206 23:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.206 23:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.206 23:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.206 23:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:23:39.206 23:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.206 23:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:39.206 23:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:39.206 23:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:39.206 23:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODQ4M2EzNWUzMDIzNTQyMjE1YTY4NGFlYTk0ZDk4MTlkmNtZ: 00:23:39.206 23:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGQ1N2U3Zjk4ZmNiN2Q3NjhmZDk2ZWY5NDk0ODAxZTZnzRsA: 00:23:39.206 23:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:39.206 23:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:39.206 23:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ODQ4M2EzNWUzMDIzNTQyMjE1YTY4NGFlYTk0ZDk4MTlkmNtZ: 00:23:39.206 23:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGQ1N2U3Zjk4ZmNiN2Q3NjhmZDk2ZWY5NDk0ODAxZTZnzRsA: ]] 00:23:39.206 23:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGQ1N2U3Zjk4ZmNiN2Q3NjhmZDk2ZWY5NDk0ODAxZTZnzRsA: 00:23:39.206 23:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:23:39.206 23:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.206 23:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:39.206 23:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:39.206 23:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:39.206 23:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.206 23:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:39.206 23:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.206 23:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.206 23:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.206 23:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.206 23:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:39.206 23:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:39.206 23:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:39.206 23:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.206 23:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.206 23:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:39.206 23:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.206 23:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:39.206 23:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:39.206 23:26:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:39.207 23:26:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:39.207 23:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.207 23:26:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.139 nvme0n1 00:23:40.139 23:26:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.139 23:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.139 23:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.139 23:26:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.139 23:26:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.139 23:26:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.139 23:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.139 
23:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.139 23:26:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.139 23:26:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.398 23:26:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.398 23:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.398 23:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:23:40.398 23:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.398 23:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:40.398 23:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:40.398 23:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:40.398 23:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGMyMmI5OGY3ZjI4ZWRiYThmMGNmYTgxZDJmNzhiMmE2MWVlYThjOTAxYzlhMTNlmI+AFw==: 00:23:40.398 23:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjMxMWJiYmRhMWQxOTRiODI2YWM3OTU0Njg4MGYzZTT4mE5V: 00:23:40.398 23:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:40.398 23:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:40.398 23:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGMyMmI5OGY3ZjI4ZWRiYThmMGNmYTgxZDJmNzhiMmE2MWVlYThjOTAxYzlhMTNlmI+AFw==: 00:23:40.398 23:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjMxMWJiYmRhMWQxOTRiODI2YWM3OTU0Njg4MGYzZTT4mE5V: ]] 00:23:40.398 23:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjMxMWJiYmRhMWQxOTRiODI2YWM3OTU0Njg4MGYzZTT4mE5V: 00:23:40.398 23:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:23:40.398 23:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.398 23:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:40.398 23:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:40.398 23:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:40.398 23:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.398 23:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:40.398 23:26:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.398 23:26:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.398 23:26:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.398 23:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.398 23:26:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:40.398 23:26:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:40.398 23:26:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:40.398 23:26:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.398 23:26:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.398 23:26:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
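The get_main_ns_ip records running through this point show how the test picks the address it will dial: the transport name is mapped to the name of an environment variable, and that variable is then dereferenced (10.0.0.1 in this run). A minimal bash sketch of that selection logic follows; it is a reconstruction from the trace, not the literal nvmf/common.sh code, and the TEST_TRANSPORT variable name is an assumption.

  # Reconstruction of the address selection traced at nvmf/common.sh@741-755.
  # NVMF_FIRST_TARGET_IP / NVMF_INITIATOR_IP come straight from the trace; the
  # TEST_TRANSPORT name is assumed.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP

      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}   # holds a variable *name*, e.g. NVMF_INITIATOR_IP
      [[ -z ${!ip} ]] && return 1            # indirect expansion; 10.0.0.1 in this run
      echo "${!ip}"
  }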
00:23:40.398 23:26:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.398 23:26:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:40.398 23:26:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:40.398 23:26:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:40.398 23:26:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:40.398 23:26:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.398 23:26:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.332 nvme0n1 00:23:41.332 23:26:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.332 23:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.332 23:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:41.332 23:26:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.332 23:26:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.332 23:26:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.332 23:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.332 23:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.332 23:26:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.332 23:26:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.332 23:26:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.332 23:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:41.332 23:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:23:41.332 23:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.332 23:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:41.332 23:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:41.332 23:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:41.332 23:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDA1NzY2ZmY2ZGNkNzkyMTRjNzY3OWYwNjBhN2MxZGMzNGE1OTgxYTcyNzU2MzM4YjI2MTU1YmQ0Mjc3ODMyMKIAFXU=: 00:23:41.332 23:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:41.332 23:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:41.332 23:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:41.332 23:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDA1NzY2ZmY2ZGNkNzkyMTRjNzY3OWYwNjBhN2MxZGMzNGE1OTgxYTcyNzU2MzM4YjI2MTU1YmQ0Mjc3ODMyMKIAFXU=: 00:23:41.332 23:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:41.332 23:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:23:41.332 23:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:41.332 23:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:41.332 23:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:41.332 
23:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:41.332 23:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.332 23:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:41.332 23:26:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.332 23:26:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.332 23:26:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.332 23:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:41.332 23:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:41.332 23:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:41.332 23:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:41.332 23:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.332 23:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.332 23:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:41.332 23:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:41.332 23:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:41.332 23:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:41.332 23:26:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:41.332 23:26:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:41.332 23:26:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.332 23:26:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.265 nvme0n1 00:23:42.265 23:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.265 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.265 23:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.265 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:42.265 23:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.265 23:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.265 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.265 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.265 23:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.265 23:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.265 23:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.265 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:42.265 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:42.265 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:42.265 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:23:42.265 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.265 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:42.265 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:42.265 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:42.265 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU1ZDc2YmIzZmUyODE5NzRhOWYxN2MzNTE0M2IwNWNEzqNT: 00:23:42.265 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWU2ZTU5YTc2MDVlZjU1ZjhlNTc1MzBmMTdiMzAwZjE4NzY4OWVkN2NjNTNjMjllNDdmMzVlMTk4ZTRkNDljZB7gIto=: 00:23:42.265 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:42.266 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:42.266 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU1ZDc2YmIzZmUyODE5NzRhOWYxN2MzNTE0M2IwNWNEzqNT: 00:23:42.266 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWU2ZTU5YTc2MDVlZjU1ZjhlNTc1MzBmMTdiMzAwZjE4NzY4OWVkN2NjNTNjMjllNDdmMzVlMTk4ZTRkNDljZB7gIto=: ]] 00:23:42.266 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWU2ZTU5YTc2MDVlZjU1ZjhlNTc1MzBmMTdiMzAwZjE4NzY4OWVkN2NjNTNjMjllNDdmMzVlMTk4ZTRkNDljZB7gIto=: 00:23:42.266 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:23:42.266 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:42.266 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:42.266 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:42.266 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:42.266 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.266 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:42.266 23:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.266 23:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.266 23:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.266 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:42.266 23:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:42.266 23:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:42.266 23:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:42.266 23:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.266 23:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.266 23:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:42.266 23:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.266 23:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:42.266 23:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:42.266 23:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:42.266 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:42.266 23:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.266 23:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.524 nvme0n1 00:23:42.524 23:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.524 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.524 23:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.524 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:42.524 23:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.524 23:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.524 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.524 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.524 23:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.524 23:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.524 23:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.524 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:42.524 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:23:42.524 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.524 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:42.524 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:42.524 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:42.524 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgzMGFjN2Y4MTY0NTAwZDEyNTQ4MTJlZDAxYTQzODlkMGFiMzJkNWRjNmQ3ZGZkIOlWzA==: 00:23:42.524 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: 00:23:42.524 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:42.524 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:42.524 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgzMGFjN2Y4MTY0NTAwZDEyNTQ4MTJlZDAxYTQzODlkMGFiMzJkNWRjNmQ3ZGZkIOlWzA==: 00:23:42.524 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: ]] 00:23:42.524 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: 00:23:42.524 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:23:42.524 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:42.524 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:42.524 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:42.524 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:42.524 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
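Each iteration starts on the target side: the nvmet_auth_set_key call traced above echoes the digest ('hmac(sha384)'), the DH group (ffdhe2048) and the DHHC-1 secrets for the current keyid. A hedged sketch of what that helper presumably does against the kernel nvmet configfs is below; the hosts/ path and the dhchap_* attribute names are assumptions inferred from the echoed values, not copied from host/auth.sh.

  # Assumed target-side plumbing for nvmet_auth_set_key (kernel nvmet configfs).
  # $key / $ckey stand for the DHHC-1:... secrets echoed in the trace; the
  # attribute names below are assumptions, verify against your kernel's nvmet.
  hostnqn=nqn.2024-02.io.spdk:host0
  cfs=/sys/kernel/config/nvmet/hosts/$hostnqn
  echo 'hmac(sha384)' > "$cfs/dhchap_hash"       # digest for this iteration
  echo 'ffdhe2048'    > "$cfs/dhchap_dhgroup"    # DH group for this iteration
  echo "$key"         > "$cfs/dhchap_key"        # host secret (DHHC-1:...)
  [[ -n $ckey ]] && echo "$ckey" > "$cfs/dhchap_ctrl_key"   # bidirectional auth, when set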
00:23:42.524 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:42.524 23:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.524 23:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.524 23:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.524 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:42.524 23:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:42.524 23:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:42.524 23:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:42.524 23:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.524 23:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.524 23:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:42.524 23:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.524 23:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:42.524 23:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:42.524 23:26:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:42.524 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:42.524 23:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.524 23:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.782 nvme0n1 00:23:42.782 23:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.782 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.782 23:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.782 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:42.782 23:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.782 23:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.782 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.782 23:26:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.782 23:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.782 23:26:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.782 23:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.782 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:42.782 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:23:42.782 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.782 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:42.782 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:42.783 23:26:58 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:23:42.783 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODQ4M2EzNWUzMDIzNTQyMjE1YTY4NGFlYTk0ZDk4MTlkmNtZ: 00:23:42.783 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGQ1N2U3Zjk4ZmNiN2Q3NjhmZDk2ZWY5NDk0ODAxZTZnzRsA: 00:23:42.783 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:42.783 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:42.783 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODQ4M2EzNWUzMDIzNTQyMjE1YTY4NGFlYTk0ZDk4MTlkmNtZ: 00:23:42.783 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGQ1N2U3Zjk4ZmNiN2Q3NjhmZDk2ZWY5NDk0ODAxZTZnzRsA: ]] 00:23:42.783 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGQ1N2U3Zjk4ZmNiN2Q3NjhmZDk2ZWY5NDk0ODAxZTZnzRsA: 00:23:42.783 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:23:42.783 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:42.783 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:42.783 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:42.783 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:42.783 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.783 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:42.783 23:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.783 23:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.783 23:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.783 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:42.783 23:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:42.783 23:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:42.783 23:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:42.783 23:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.783 23:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.783 23:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:42.783 23:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.783 23:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:42.783 23:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:42.783 23:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:42.783 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:42.783 23:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.783 23:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.041 nvme0n1 00:23:43.041 23:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.041 23:26:58 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.041 23:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.041 23:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.041 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:43.041 23:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.041 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.041 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.041 23:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.041 23:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.041 23:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.041 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:43.041 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:23:43.041 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.041 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:43.041 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:43.041 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:43.041 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGMyMmI5OGY3ZjI4ZWRiYThmMGNmYTgxZDJmNzhiMmE2MWVlYThjOTAxYzlhMTNlmI+AFw==: 00:23:43.041 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjMxMWJiYmRhMWQxOTRiODI2YWM3OTU0Njg4MGYzZTT4mE5V: 00:23:43.041 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:43.041 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:43.041 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGMyMmI5OGY3ZjI4ZWRiYThmMGNmYTgxZDJmNzhiMmE2MWVlYThjOTAxYzlhMTNlmI+AFw==: 00:23:43.041 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjMxMWJiYmRhMWQxOTRiODI2YWM3OTU0Njg4MGYzZTT4mE5V: ]] 00:23:43.041 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjMxMWJiYmRhMWQxOTRiODI2YWM3OTU0Njg4MGYzZTT4mE5V: 00:23:43.041 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:23:43.041 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:43.041 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:43.041 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:43.041 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:43.041 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:43.041 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:43.041 23:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.041 23:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.041 23:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.041 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:43.041 23:26:58 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:23:43.041 23:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:43.041 23:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:43.041 23:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.041 23:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.041 23:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:43.041 23:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.041 23:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:43.041 23:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:43.041 23:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:43.041 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:43.041 23:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.041 23:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.299 nvme0n1 00:23:43.299 23:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.299 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.299 23:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.299 23:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.299 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:43.299 23:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.299 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.299 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.299 23:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.299 23:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.299 23:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.299 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:43.299 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:23:43.299 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.299 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:43.299 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:43.299 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:43.299 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDA1NzY2ZmY2ZGNkNzkyMTRjNzY3OWYwNjBhN2MxZGMzNGE1OTgxYTcyNzU2MzM4YjI2MTU1YmQ0Mjc3ODMyMKIAFXU=: 00:23:43.299 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:43.299 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:43.299 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:43.299 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NDA1NzY2ZmY2ZGNkNzkyMTRjNzY3OWYwNjBhN2MxZGMzNGE1OTgxYTcyNzU2MzM4YjI2MTU1YmQ0Mjc3ODMyMKIAFXU=: 00:23:43.299 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:43.299 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:23:43.299 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:43.299 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:43.299 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:43.299 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:43.299 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:43.299 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:43.299 23:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.299 23:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.299 23:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.299 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:43.299 23:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:43.300 23:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:43.300 23:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:43.300 23:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.300 23:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.300 23:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:43.300 23:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.300 23:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:43.300 23:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:43.300 23:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:43.300 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:43.300 23:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.300 23:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.557 nvme0n1 00:23:43.557 23:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.557 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.557 23:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.557 23:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.557 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:43.557 23:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.557 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.557 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.557 23:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:23:43.557 23:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.557 23:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.557 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:43.557 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:43.557 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:23:43.557 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.557 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:43.557 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:43.557 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:43.557 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU1ZDc2YmIzZmUyODE5NzRhOWYxN2MzNTE0M2IwNWNEzqNT: 00:23:43.557 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWU2ZTU5YTc2MDVlZjU1ZjhlNTc1MzBmMTdiMzAwZjE4NzY4OWVkN2NjNTNjMjllNDdmMzVlMTk4ZTRkNDljZB7gIto=: 00:23:43.557 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:43.557 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:43.557 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU1ZDc2YmIzZmUyODE5NzRhOWYxN2MzNTE0M2IwNWNEzqNT: 00:23:43.557 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWU2ZTU5YTc2MDVlZjU1ZjhlNTc1MzBmMTdiMzAwZjE4NzY4OWVkN2NjNTNjMjllNDdmMzVlMTk4ZTRkNDljZB7gIto=: ]] 00:23:43.557 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWU2ZTU5YTc2MDVlZjU1ZjhlNTc1MzBmMTdiMzAwZjE4NzY4OWVkN2NjNTNjMjllNDdmMzVlMTk4ZTRkNDljZB7gIto=: 00:23:43.557 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:23:43.557 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:43.557 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:43.557 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:43.557 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:43.557 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:43.557 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:43.557 23:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.557 23:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.557 23:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.557 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:43.557 23:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:43.557 23:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:43.557 23:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:43.557 23:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.557 23:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.557 23:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
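On the host side the same RPC sequence recurs for every combination; the records around this point are the sha384/ffdhe3072, keyid 0 pass. Written against scripts/rpc.py rather than the test's rpc_cmd wrapper, the sequence looks roughly like the sketch below. All flags and addresses are copied from the trace; key0/ckey0 refer to whatever keyring entries the earlier part of the test registered, so treat them as placeholders.

  # Host-side RPCs per (digest, dhgroup, keyid) combination, as traced.
  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  ./scripts/rpc.py bdev_nvme_get_controllers        # expect one controller named nvme0
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0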
00:23:43.557 23:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.557 23:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:43.557 23:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:43.557 23:26:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:43.557 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:43.557 23:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.557 23:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.814 nvme0n1 00:23:43.814 23:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.814 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.814 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:43.814 23:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.814 23:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.814 23:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.814 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.814 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.814 23:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.814 23:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.814 23:26:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.814 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:43.814 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:23:43.814 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.814 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:43.814 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:43.814 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:43.814 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgzMGFjN2Y4MTY0NTAwZDEyNTQ4MTJlZDAxYTQzODlkMGFiMzJkNWRjNmQ3ZGZkIOlWzA==: 00:23:43.814 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: 00:23:43.814 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:43.814 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:43.814 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgzMGFjN2Y4MTY0NTAwZDEyNTQ4MTJlZDAxYTQzODlkMGFiMzJkNWRjNmQ3ZGZkIOlWzA==: 00:23:43.814 23:26:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: ]] 00:23:43.814 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: 00:23:43.814 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
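The same pattern repeats for every digest, DH group and key index; host/auth.sh lines 100-104 in the trace are the driving loops. Below is a hedged reconstruction of that structure, with the array contents limited to the values visible in this excerpt (the real script may cover more).

  # Reconstruction of the driver loop traced at host/auth.sh@100-104.
  digests=(sha256 sha384)
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe8192)
  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do                       # 0..4 in this run
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # target side (configfs)
              connect_authenticate "$digest" "$dhgroup" "$keyid"   # host side (attach/detach)
          done
      done
  done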
00:23:43.814 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:43.814 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:43.814 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:43.814 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:43.814 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:43.814 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:43.814 23:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.814 23:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.814 23:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.814 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:43.814 23:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:43.814 23:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:43.814 23:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:43.814 23:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.814 23:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.814 23:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:43.814 23:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.814 23:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:43.814 23:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:43.814 23:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:43.814 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:43.814 23:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.814 23:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.071 nvme0n1 00:23:44.071 23:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.071 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:44.071 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:44.071 23:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.071 23:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.071 23:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.071 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.071 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:44.071 23:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.071 23:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.071 23:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.071 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:23:44.071 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:23:44.071 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:44.071 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:44.071 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:44.071 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:44.071 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODQ4M2EzNWUzMDIzNTQyMjE1YTY4NGFlYTk0ZDk4MTlkmNtZ: 00:23:44.071 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGQ1N2U3Zjk4ZmNiN2Q3NjhmZDk2ZWY5NDk0ODAxZTZnzRsA: 00:23:44.071 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:44.071 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:44.071 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODQ4M2EzNWUzMDIzNTQyMjE1YTY4NGFlYTk0ZDk4MTlkmNtZ: 00:23:44.071 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGQ1N2U3Zjk4ZmNiN2Q3NjhmZDk2ZWY5NDk0ODAxZTZnzRsA: ]] 00:23:44.071 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGQ1N2U3Zjk4ZmNiN2Q3NjhmZDk2ZWY5NDk0ODAxZTZnzRsA: 00:23:44.071 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:23:44.071 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:44.071 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:44.071 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:44.071 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:44.071 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:44.071 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:44.071 23:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.071 23:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.071 23:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.071 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:44.071 23:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:44.071 23:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:44.071 23:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:44.071 23:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:44.071 23:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:44.071 23:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:44.071 23:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:44.071 23:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:44.071 23:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:44.071 23:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:44.071 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:44.071 23:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.071 23:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.329 nvme0n1 00:23:44.329 23:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.329 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:44.329 23:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.329 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:44.329 23:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.329 23:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.329 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.329 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:44.329 23:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.329 23:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.329 23:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.329 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:44.329 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:23:44.329 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:44.329 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:44.329 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:44.329 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:44.329 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGMyMmI5OGY3ZjI4ZWRiYThmMGNmYTgxZDJmNzhiMmE2MWVlYThjOTAxYzlhMTNlmI+AFw==: 00:23:44.329 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjMxMWJiYmRhMWQxOTRiODI2YWM3OTU0Njg4MGYzZTT4mE5V: 00:23:44.329 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:44.329 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:44.329 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGMyMmI5OGY3ZjI4ZWRiYThmMGNmYTgxZDJmNzhiMmE2MWVlYThjOTAxYzlhMTNlmI+AFw==: 00:23:44.329 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjMxMWJiYmRhMWQxOTRiODI2YWM3OTU0Njg4MGYzZTT4mE5V: ]] 00:23:44.329 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjMxMWJiYmRhMWQxOTRiODI2YWM3OTU0Njg4MGYzZTT4mE5V: 00:23:44.329 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:23:44.329 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:44.329 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:44.329 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:44.329 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:44.329 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:44.329 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:44.329 23:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.329 23:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.329 23:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.329 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:44.329 23:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:44.329 23:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:44.329 23:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:44.329 23:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:44.329 23:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:44.329 23:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:44.329 23:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:44.329 23:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:44.329 23:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:44.329 23:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:44.329 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:44.329 23:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.329 23:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.586 nvme0n1 00:23:44.586 23:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.586 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:44.586 23:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.586 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:44.586 23:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.586 23:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.586 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.586 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:44.586 23:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.586 23:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.586 23:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.586 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:44.586 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:23:44.586 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:44.586 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:44.586 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:44.586 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:44.586 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NDA1NzY2ZmY2ZGNkNzkyMTRjNzY3OWYwNjBhN2MxZGMzNGE1OTgxYTcyNzU2MzM4YjI2MTU1YmQ0Mjc3ODMyMKIAFXU=: 00:23:44.586 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:44.586 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:44.586 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:44.586 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDA1NzY2ZmY2ZGNkNzkyMTRjNzY3OWYwNjBhN2MxZGMzNGE1OTgxYTcyNzU2MzM4YjI2MTU1YmQ0Mjc3ODMyMKIAFXU=: 00:23:44.586 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:44.586 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:23:44.586 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:44.586 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:44.586 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:44.586 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:44.586 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:44.586 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:44.586 23:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.586 23:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.586 23:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.586 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:44.586 23:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:44.586 23:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:44.586 23:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:44.586 23:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:44.586 23:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:44.586 23:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:44.586 23:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:44.586 23:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:44.586 23:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:44.586 23:26:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:44.586 23:26:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:44.586 23:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.586 23:26:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.844 nvme0n1 00:23:44.844 23:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.844 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:44.844 23:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.844 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:44.844 23:27:00 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.844 23:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.844 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.844 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:44.844 23:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.844 23:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.844 23:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.844 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:44.844 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:44.844 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:23:44.844 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:44.844 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:44.844 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:44.844 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:44.844 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU1ZDc2YmIzZmUyODE5NzRhOWYxN2MzNTE0M2IwNWNEzqNT: 00:23:44.844 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWU2ZTU5YTc2MDVlZjU1ZjhlNTc1MzBmMTdiMzAwZjE4NzY4OWVkN2NjNTNjMjllNDdmMzVlMTk4ZTRkNDljZB7gIto=: 00:23:44.844 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:44.844 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:44.844 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU1ZDc2YmIzZmUyODE5NzRhOWYxN2MzNTE0M2IwNWNEzqNT: 00:23:44.844 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWU2ZTU5YTc2MDVlZjU1ZjhlNTc1MzBmMTdiMzAwZjE4NzY4OWVkN2NjNTNjMjllNDdmMzVlMTk4ZTRkNDljZB7gIto=: ]] 00:23:44.844 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWU2ZTU5YTc2MDVlZjU1ZjhlNTc1MzBmMTdiMzAwZjE4NzY4OWVkN2NjNTNjMjllNDdmMzVlMTk4ZTRkNDljZB7gIto=: 00:23:44.844 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:23:44.844 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:44.844 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:44.844 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:44.844 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:44.844 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:44.844 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:44.844 23:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.844 23:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.844 23:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.844 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:44.844 23:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:44.844 23:27:00 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:23:44.844 23:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:44.844 23:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:44.844 23:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:44.844 23:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:44.844 23:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:44.844 23:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:44.844 23:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:44.844 23:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:44.844 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:44.844 23:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.844 23:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.409 nvme0n1 00:23:45.409 23:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.409 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:45.409 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:45.409 23:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.409 23:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.409 23:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.409 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:45.409 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:45.409 23:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.409 23:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.409 23:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.409 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:45.409 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:23:45.409 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:45.409 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:45.409 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:45.409 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:45.409 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgzMGFjN2Y4MTY0NTAwZDEyNTQ4MTJlZDAxYTQzODlkMGFiMzJkNWRjNmQ3ZGZkIOlWzA==: 00:23:45.409 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: 00:23:45.409 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:45.409 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:45.409 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MjgzMGFjN2Y4MTY0NTAwZDEyNTQ4MTJlZDAxYTQzODlkMGFiMzJkNWRjNmQ3ZGZkIOlWzA==: 00:23:45.409 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: ]] 00:23:45.409 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: 00:23:45.409 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:23:45.409 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:45.410 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:45.410 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:45.410 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:45.410 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:45.410 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:45.410 23:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.410 23:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.410 23:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.410 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:45.410 23:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:45.410 23:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:45.410 23:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:45.410 23:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:45.410 23:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:45.410 23:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:45.410 23:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:45.410 23:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:45.410 23:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:45.410 23:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:45.410 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:45.410 23:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.410 23:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.668 nvme0n1 00:23:45.668 23:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.668 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:45.668 23:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.668 23:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.668 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:45.668 23:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.668 23:27:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:45.668 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:45.668 23:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.668 23:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.668 23:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.668 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:45.668 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:23:45.668 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:45.668 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:45.668 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:45.668 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:45.668 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODQ4M2EzNWUzMDIzNTQyMjE1YTY4NGFlYTk0ZDk4MTlkmNtZ: 00:23:45.668 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGQ1N2U3Zjk4ZmNiN2Q3NjhmZDk2ZWY5NDk0ODAxZTZnzRsA: 00:23:45.668 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:45.668 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:45.668 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODQ4M2EzNWUzMDIzNTQyMjE1YTY4NGFlYTk0ZDk4MTlkmNtZ: 00:23:45.668 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGQ1N2U3Zjk4ZmNiN2Q3NjhmZDk2ZWY5NDk0ODAxZTZnzRsA: ]] 00:23:45.668 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGQ1N2U3Zjk4ZmNiN2Q3NjhmZDk2ZWY5NDk0ODAxZTZnzRsA: 00:23:45.668 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:23:45.668 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:45.668 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:45.668 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:45.668 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:45.668 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:45.668 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:45.668 23:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.668 23:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.668 23:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.668 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:45.668 23:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:45.668 23:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:45.668 23:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:45.668 23:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:45.668 23:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:45.668 23:27:00 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:45.668 23:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:45.668 23:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:45.668 23:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:45.668 23:27:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:45.668 23:27:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:45.668 23:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.668 23:27:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.926 nvme0n1 00:23:45.926 23:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.926 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:45.926 23:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.926 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:45.926 23:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.926 23:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.184 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.184 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:46.184 23:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.184 23:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.184 23:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.184 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:46.184 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:23:46.184 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:46.185 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:46.185 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:46.185 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:46.185 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGMyMmI5OGY3ZjI4ZWRiYThmMGNmYTgxZDJmNzhiMmE2MWVlYThjOTAxYzlhMTNlmI+AFw==: 00:23:46.185 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjMxMWJiYmRhMWQxOTRiODI2YWM3OTU0Njg4MGYzZTT4mE5V: 00:23:46.185 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:46.185 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:46.185 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGMyMmI5OGY3ZjI4ZWRiYThmMGNmYTgxZDJmNzhiMmE2MWVlYThjOTAxYzlhMTNlmI+AFw==: 00:23:46.185 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjMxMWJiYmRhMWQxOTRiODI2YWM3OTU0Njg4MGYzZTT4mE5V: ]] 00:23:46.185 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjMxMWJiYmRhMWQxOTRiODI2YWM3OTU0Njg4MGYzZTT4mE5V: 00:23:46.185 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:23:46.185 23:27:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:46.185 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:46.185 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:46.185 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:46.185 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:46.185 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:46.185 23:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.185 23:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.185 23:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.185 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:46.185 23:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:46.185 23:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:46.185 23:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:46.185 23:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:46.185 23:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:46.185 23:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:46.185 23:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:46.185 23:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:46.185 23:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:46.185 23:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:46.185 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:46.185 23:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.185 23:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.443 nvme0n1 00:23:46.443 23:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.443 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:46.443 23:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.443 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:46.443 23:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.443 23:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.443 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.443 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:46.443 23:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.443 23:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.443 23:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.443 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:46.443 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:23:46.443 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:46.443 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:46.443 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:46.443 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:46.443 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDA1NzY2ZmY2ZGNkNzkyMTRjNzY3OWYwNjBhN2MxZGMzNGE1OTgxYTcyNzU2MzM4YjI2MTU1YmQ0Mjc3ODMyMKIAFXU=: 00:23:46.443 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:46.443 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:46.443 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:46.443 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDA1NzY2ZmY2ZGNkNzkyMTRjNzY3OWYwNjBhN2MxZGMzNGE1OTgxYTcyNzU2MzM4YjI2MTU1YmQ0Mjc3ODMyMKIAFXU=: 00:23:46.443 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:46.443 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:23:46.443 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:46.443 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:46.443 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:46.443 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:46.443 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:46.443 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:46.443 23:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.443 23:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.443 23:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.443 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:46.443 23:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:46.443 23:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:46.443 23:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:46.443 23:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:46.443 23:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:46.443 23:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:46.443 23:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:46.443 23:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:46.443 23:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:46.443 23:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:46.443 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:46.443 23:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:46.443 23:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.701 nvme0n1 00:23:46.701 23:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.701 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:46.701 23:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.701 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:46.701 23:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.701 23:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.701 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.701 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:46.701 23:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.701 23:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.701 23:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.701 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:46.701 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:46.701 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:23:46.701 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:46.701 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:46.701 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:46.701 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:46.701 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU1ZDc2YmIzZmUyODE5NzRhOWYxN2MzNTE0M2IwNWNEzqNT: 00:23:46.701 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWU2ZTU5YTc2MDVlZjU1ZjhlNTc1MzBmMTdiMzAwZjE4NzY4OWVkN2NjNTNjMjllNDdmMzVlMTk4ZTRkNDljZB7gIto=: 00:23:46.701 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:46.701 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:46.701 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU1ZDc2YmIzZmUyODE5NzRhOWYxN2MzNTE0M2IwNWNEzqNT: 00:23:46.701 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWU2ZTU5YTc2MDVlZjU1ZjhlNTc1MzBmMTdiMzAwZjE4NzY4OWVkN2NjNTNjMjllNDdmMzVlMTk4ZTRkNDljZB7gIto=: ]] 00:23:46.701 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWU2ZTU5YTc2MDVlZjU1ZjhlNTc1MzBmMTdiMzAwZjE4NzY4OWVkN2NjNTNjMjllNDdmMzVlMTk4ZTRkNDljZB7gIto=: 00:23:46.701 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:23:46.701 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:46.701 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:46.701 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:46.701 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:46.701 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:46.701 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:23:46.701 23:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.701 23:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.701 23:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.701 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:46.701 23:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:46.701 23:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:46.701 23:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:46.701 23:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:46.701 23:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:46.701 23:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:46.701 23:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:46.701 23:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:46.701 23:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:46.701 23:27:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:46.701 23:27:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:46.701 23:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.701 23:27:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.266 nvme0n1 00:23:47.266 23:27:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.266 23:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:47.266 23:27:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.266 23:27:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.266 23:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:47.266 23:27:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.266 23:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.266 23:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:47.266 23:27:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.266 23:27:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.524 23:27:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.524 23:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:47.524 23:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:23:47.524 23:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:47.524 23:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:47.524 23:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:47.524 23:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:47.524 23:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MjgzMGFjN2Y4MTY0NTAwZDEyNTQ4MTJlZDAxYTQzODlkMGFiMzJkNWRjNmQ3ZGZkIOlWzA==: 00:23:47.524 23:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: 00:23:47.524 23:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:47.524 23:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:47.524 23:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgzMGFjN2Y4MTY0NTAwZDEyNTQ4MTJlZDAxYTQzODlkMGFiMzJkNWRjNmQ3ZGZkIOlWzA==: 00:23:47.524 23:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: ]] 00:23:47.524 23:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: 00:23:47.524 23:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:23:47.524 23:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:47.524 23:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:47.524 23:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:47.524 23:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:47.524 23:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:47.524 23:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:47.524 23:27:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.524 23:27:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.524 23:27:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.524 23:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:47.524 23:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:47.524 23:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:47.524 23:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:47.524 23:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:47.524 23:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:47.524 23:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:47.524 23:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:47.524 23:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:47.524 23:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:47.524 23:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:47.524 23:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:47.524 23:27:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.524 23:27:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.090 nvme0n1 00:23:48.090 23:27:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.090 23:27:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:48.090 23:27:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.090 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:48.090 23:27:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.090 23:27:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.090 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.090 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:48.090 23:27:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.090 23:27:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.090 23:27:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.090 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:48.090 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:23:48.090 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:48.090 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:48.090 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:48.090 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:48.090 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODQ4M2EzNWUzMDIzNTQyMjE1YTY4NGFlYTk0ZDk4MTlkmNtZ: 00:23:48.090 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGQ1N2U3Zjk4ZmNiN2Q3NjhmZDk2ZWY5NDk0ODAxZTZnzRsA: 00:23:48.090 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:48.090 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:48.090 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODQ4M2EzNWUzMDIzNTQyMjE1YTY4NGFlYTk0ZDk4MTlkmNtZ: 00:23:48.090 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGQ1N2U3Zjk4ZmNiN2Q3NjhmZDk2ZWY5NDk0ODAxZTZnzRsA: ]] 00:23:48.090 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGQ1N2U3Zjk4ZmNiN2Q3NjhmZDk2ZWY5NDk0ODAxZTZnzRsA: 00:23:48.090 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:23:48.090 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:48.090 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:48.090 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:48.090 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:48.090 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:48.090 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:48.090 23:27:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.090 23:27:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.090 23:27:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.090 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:48.090 23:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:23:48.090 23:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:48.090 23:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:48.090 23:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:48.090 23:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:48.090 23:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:48.090 23:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:48.090 23:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:48.090 23:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:48.090 23:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:48.090 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:48.090 23:27:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.090 23:27:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.656 nvme0n1 00:23:48.656 23:27:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.656 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:48.656 23:27:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.656 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:48.656 23:27:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.656 23:27:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.656 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.656 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:48.656 23:27:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.656 23:27:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.656 23:27:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.656 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:48.657 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:23:48.657 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:48.657 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:48.657 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:48.657 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:48.657 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGMyMmI5OGY3ZjI4ZWRiYThmMGNmYTgxZDJmNzhiMmE2MWVlYThjOTAxYzlhMTNlmI+AFw==: 00:23:48.657 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjMxMWJiYmRhMWQxOTRiODI2YWM3OTU0Njg4MGYzZTT4mE5V: 00:23:48.657 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:48.657 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:48.657 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MGMyMmI5OGY3ZjI4ZWRiYThmMGNmYTgxZDJmNzhiMmE2MWVlYThjOTAxYzlhMTNlmI+AFw==: 00:23:48.657 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjMxMWJiYmRhMWQxOTRiODI2YWM3OTU0Njg4MGYzZTT4mE5V: ]] 00:23:48.657 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjMxMWJiYmRhMWQxOTRiODI2YWM3OTU0Njg4MGYzZTT4mE5V: 00:23:48.657 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:23:48.657 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:48.657 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:48.657 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:48.657 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:48.657 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:48.657 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:48.657 23:27:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.657 23:27:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.657 23:27:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.657 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:48.657 23:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:48.657 23:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:48.657 23:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:48.657 23:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:48.657 23:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:48.657 23:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:48.657 23:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:48.657 23:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:48.657 23:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:48.657 23:27:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:48.657 23:27:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:48.657 23:27:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.657 23:27:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.220 nvme0n1 00:23:49.220 23:27:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.220 23:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.220 23:27:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.220 23:27:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.220 23:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.220 23:27:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.220 23:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:23:49.220 23:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.220 23:27:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.220 23:27:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.221 23:27:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.221 23:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:49.221 23:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:23:49.221 23:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.221 23:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:49.221 23:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:49.221 23:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:49.221 23:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDA1NzY2ZmY2ZGNkNzkyMTRjNzY3OWYwNjBhN2MxZGMzNGE1OTgxYTcyNzU2MzM4YjI2MTU1YmQ0Mjc3ODMyMKIAFXU=: 00:23:49.221 23:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:49.221 23:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:49.221 23:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:49.221 23:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDA1NzY2ZmY2ZGNkNzkyMTRjNzY3OWYwNjBhN2MxZGMzNGE1OTgxYTcyNzU2MzM4YjI2MTU1YmQ0Mjc3ODMyMKIAFXU=: 00:23:49.221 23:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:49.221 23:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:23:49.221 23:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:49.221 23:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:49.221 23:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:49.221 23:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:49.221 23:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:49.221 23:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:49.221 23:27:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.221 23:27:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.221 23:27:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.221 23:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:49.221 23:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:49.221 23:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:49.221 23:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:49.221 23:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.221 23:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.221 23:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:49.221 23:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.221 23:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:23:49.221 23:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:49.221 23:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:49.221 23:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:49.221 23:27:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.221 23:27:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.785 nvme0n1 00:23:49.785 23:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.785 23:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.785 23:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.785 23:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.785 23:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.043 23:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.043 23:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.043 23:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.043 23:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.043 23:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.044 23:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.044 23:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:50.044 23:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:50.044 23:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:23:50.044 23:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:50.044 23:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:50.044 23:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:50.044 23:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:50.044 23:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU1ZDc2YmIzZmUyODE5NzRhOWYxN2MzNTE0M2IwNWNEzqNT: 00:23:50.044 23:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWU2ZTU5YTc2MDVlZjU1ZjhlNTc1MzBmMTdiMzAwZjE4NzY4OWVkN2NjNTNjMjllNDdmMzVlMTk4ZTRkNDljZB7gIto=: 00:23:50.044 23:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:50.044 23:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:50.044 23:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU1ZDc2YmIzZmUyODE5NzRhOWYxN2MzNTE0M2IwNWNEzqNT: 00:23:50.044 23:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWU2ZTU5YTc2MDVlZjU1ZjhlNTc1MzBmMTdiMzAwZjE4NzY4OWVkN2NjNTNjMjllNDdmMzVlMTk4ZTRkNDljZB7gIto=: ]] 00:23:50.044 23:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWU2ZTU5YTc2MDVlZjU1ZjhlNTc1MzBmMTdiMzAwZjE4NzY4OWVkN2NjNTNjMjllNDdmMzVlMTk4ZTRkNDljZB7gIto=: 00:23:50.044 23:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:23:50.044 23:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
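The trace above repeats the same host-side DH-HMAC-CHAP exercise for every digest/DH-group/key combination: bdev_nvme_set_options restricts the initiator to a single digest and DH group, bdev_nvme_attach_controller then authenticates against the target with the matching --dhchap-key/--dhchap-ctrlr-key pair, and the controller is verified with bdev_nvme_get_controllers before being detached. A minimal sketch of one iteration (sha384 with ffdhe8192, keyid 0) is shown here; it assumes the key names key0/ckey0 were already registered with the host earlier in auth.sh (not shown in this excerpt) and that rpc_cmd is the test helper wrapping scripts/rpc.py against the running target.
  # restrict host authentication to a single digest and DH group
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
  # attach with in-band authentication; ckey0 is the controller key for bidirectional auth
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # on success the controller is listed as nvme0
  rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'
  rpc_cmd bdev_nvme_detach_controller nvme0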
00:23:50.044 23:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:50.044 23:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:50.044 23:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:50.044 23:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:50.044 23:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:50.044 23:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.044 23:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.044 23:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.044 23:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:50.044 23:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:50.044 23:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:50.044 23:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:50.044 23:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.044 23:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.044 23:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:50.044 23:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.044 23:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:50.044 23:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:50.044 23:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:50.044 23:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:50.044 23:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.044 23:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.977 nvme0n1 00:23:50.977 23:27:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.977 23:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.977 23:27:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.977 23:27:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.977 23:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:50.977 23:27:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.977 23:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.977 23:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.977 23:27:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.977 23:27:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.977 23:27:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.977 23:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:50.977 23:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:23:50.977 23:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:50.977 23:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:50.977 23:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:50.977 23:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:50.977 23:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgzMGFjN2Y4MTY0NTAwZDEyNTQ4MTJlZDAxYTQzODlkMGFiMzJkNWRjNmQ3ZGZkIOlWzA==: 00:23:50.977 23:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: 00:23:50.977 23:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:50.977 23:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:50.977 23:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgzMGFjN2Y4MTY0NTAwZDEyNTQ4MTJlZDAxYTQzODlkMGFiMzJkNWRjNmQ3ZGZkIOlWzA==: 00:23:50.977 23:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: ]] 00:23:50.977 23:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: 00:23:50.977 23:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:23:50.977 23:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:50.977 23:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:50.977 23:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:50.977 23:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:50.977 23:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:50.977 23:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:50.977 23:27:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.977 23:27:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.977 23:27:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.977 23:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:50.977 23:27:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:50.977 23:27:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:50.977 23:27:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:50.977 23:27:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.977 23:27:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.977 23:27:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:50.977 23:27:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.977 23:27:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:50.977 23:27:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:50.977 23:27:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:50.977 23:27:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:50.977 23:27:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.977 23:27:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.908 nvme0n1 00:23:51.908 23:27:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.908 23:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.908 23:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.908 23:27:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.908 23:27:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.908 23:27:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.165 23:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.165 23:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.165 23:27:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.165 23:27:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.165 23:27:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.165 23:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:52.165 23:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:23:52.165 23:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:52.165 23:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:52.165 23:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:52.165 23:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:52.165 23:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODQ4M2EzNWUzMDIzNTQyMjE1YTY4NGFlYTk0ZDk4MTlkmNtZ: 00:23:52.165 23:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGQ1N2U3Zjk4ZmNiN2Q3NjhmZDk2ZWY5NDk0ODAxZTZnzRsA: 00:23:52.165 23:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:52.165 23:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:52.165 23:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODQ4M2EzNWUzMDIzNTQyMjE1YTY4NGFlYTk0ZDk4MTlkmNtZ: 00:23:52.165 23:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGQ1N2U3Zjk4ZmNiN2Q3NjhmZDk2ZWY5NDk0ODAxZTZnzRsA: ]] 00:23:52.165 23:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGQ1N2U3Zjk4ZmNiN2Q3NjhmZDk2ZWY5NDk0ODAxZTZnzRsA: 00:23:52.165 23:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:23:52.165 23:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:52.165 23:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:52.165 23:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:52.165 23:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:52.165 23:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:52.165 23:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:23:52.165 23:27:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.165 23:27:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.165 23:27:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.165 23:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:52.165 23:27:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:52.165 23:27:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:52.165 23:27:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:52.165 23:27:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.165 23:27:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.165 23:27:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:52.165 23:27:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.165 23:27:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:52.165 23:27:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:52.165 23:27:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:52.165 23:27:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:52.165 23:27:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.165 23:27:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.097 nvme0n1 00:23:53.097 23:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.097 23:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.097 23:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:53.097 23:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.097 23:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.097 23:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.097 23:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.097 23:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.097 23:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.097 23:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.097 23:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.097 23:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:53.097 23:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:23:53.097 23:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:53.097 23:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:53.097 23:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:53.097 23:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:53.097 23:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MGMyMmI5OGY3ZjI4ZWRiYThmMGNmYTgxZDJmNzhiMmE2MWVlYThjOTAxYzlhMTNlmI+AFw==: 00:23:53.097 23:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjMxMWJiYmRhMWQxOTRiODI2YWM3OTU0Njg4MGYzZTT4mE5V: 00:23:53.097 23:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:53.097 23:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:53.097 23:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGMyMmI5OGY3ZjI4ZWRiYThmMGNmYTgxZDJmNzhiMmE2MWVlYThjOTAxYzlhMTNlmI+AFw==: 00:23:53.097 23:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjMxMWJiYmRhMWQxOTRiODI2YWM3OTU0Njg4MGYzZTT4mE5V: ]] 00:23:53.097 23:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjMxMWJiYmRhMWQxOTRiODI2YWM3OTU0Njg4MGYzZTT4mE5V: 00:23:53.097 23:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:23:53.097 23:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:53.097 23:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:53.097 23:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:53.097 23:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:53.097 23:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:53.097 23:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:53.097 23:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.097 23:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.097 23:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.097 23:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:53.097 23:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:53.097 23:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:53.097 23:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:53.097 23:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.097 23:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.097 23:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:53.097 23:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.097 23:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:53.097 23:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:53.097 23:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:53.097 23:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:53.097 23:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.097 23:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.030 nvme0n1 00:23:54.030 23:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.030 23:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:23:54.030 23:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:54.030 23:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.030 23:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.030 23:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.030 23:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.030 23:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:54.030 23:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.030 23:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.030 23:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.030 23:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:54.030 23:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:23:54.030 23:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:54.030 23:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:54.030 23:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:54.030 23:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:54.030 23:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDA1NzY2ZmY2ZGNkNzkyMTRjNzY3OWYwNjBhN2MxZGMzNGE1OTgxYTcyNzU2MzM4YjI2MTU1YmQ0Mjc3ODMyMKIAFXU=: 00:23:54.030 23:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:54.030 23:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:54.030 23:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:54.030 23:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDA1NzY2ZmY2ZGNkNzkyMTRjNzY3OWYwNjBhN2MxZGMzNGE1OTgxYTcyNzU2MzM4YjI2MTU1YmQ0Mjc3ODMyMKIAFXU=: 00:23:54.031 23:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:54.031 23:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:23:54.031 23:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:54.031 23:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:54.031 23:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:54.031 23:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:54.031 23:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:54.031 23:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:54.031 23:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.031 23:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.031 23:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.031 23:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:54.031 23:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:54.031 23:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:54.031 23:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:54.031 23:27:09 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:54.031 23:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:54.031 23:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:54.031 23:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:54.031 23:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:54.031 23:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:54.031 23:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:54.031 23:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:54.031 23:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.031 23:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.404 nvme0n1 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU1ZDc2YmIzZmUyODE5NzRhOWYxN2MzNTE0M2IwNWNEzqNT: 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWU2ZTU5YTc2MDVlZjU1ZjhlNTc1MzBmMTdiMzAwZjE4NzY4OWVkN2NjNTNjMjllNDdmMzVlMTk4ZTRkNDljZB7gIto=: 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTU1ZDc2YmIzZmUyODE5NzRhOWYxN2MzNTE0M2IwNWNEzqNT: 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWU2ZTU5YTc2MDVlZjU1ZjhlNTc1MzBmMTdiMzAwZjE4NzY4OWVkN2NjNTNjMjllNDdmMzVlMTk4ZTRkNDljZB7gIto=: ]] 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWU2ZTU5YTc2MDVlZjU1ZjhlNTc1MzBmMTdiMzAwZjE4NzY4OWVkN2NjNTNjMjllNDdmMzVlMTk4ZTRkNDljZB7gIto=: 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.404 nvme0n1 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.404 23:27:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.404 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:55.405 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:23:55.405 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:55.405 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:55.405 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:55.405 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:55.405 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgzMGFjN2Y4MTY0NTAwZDEyNTQ4MTJlZDAxYTQzODlkMGFiMzJkNWRjNmQ3ZGZkIOlWzA==: 00:23:55.405 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: 00:23:55.405 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:55.405 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:55.405 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgzMGFjN2Y4MTY0NTAwZDEyNTQ4MTJlZDAxYTQzODlkMGFiMzJkNWRjNmQ3ZGZkIOlWzA==: 00:23:55.405 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: ]] 00:23:55.405 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: 00:23:55.405 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:23:55.405 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:55.405 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:55.405 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:55.405 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:55.405 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:55.405 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:55.405 23:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.405 23:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.405 23:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.405 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:55.405 23:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:55.405 23:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:55.405 23:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:55.405 23:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:55.405 23:27:10 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:55.405 23:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:55.405 23:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:55.405 23:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:55.405 23:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:55.405 23:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:55.405 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:55.405 23:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.405 23:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.662 nvme0n1 00:23:55.662 23:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.662 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:55.662 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:55.662 23:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.662 23:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.662 23:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.662 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:55.662 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:55.662 23:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.662 23:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.662 23:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.662 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:55.662 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:23:55.662 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:55.662 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:55.662 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:55.662 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:55.662 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODQ4M2EzNWUzMDIzNTQyMjE1YTY4NGFlYTk0ZDk4MTlkmNtZ: 00:23:55.662 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGQ1N2U3Zjk4ZmNiN2Q3NjhmZDk2ZWY5NDk0ODAxZTZnzRsA: 00:23:55.662 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:55.662 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:55.662 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODQ4M2EzNWUzMDIzNTQyMjE1YTY4NGFlYTk0ZDk4MTlkmNtZ: 00:23:55.662 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGQ1N2U3Zjk4ZmNiN2Q3NjhmZDk2ZWY5NDk0ODAxZTZnzRsA: ]] 00:23:55.662 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGQ1N2U3Zjk4ZmNiN2Q3NjhmZDk2ZWY5NDk0ODAxZTZnzRsA: 00:23:55.662 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:23:55.662 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:55.662 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:55.662 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:55.662 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:55.662 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:55.662 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:55.662 23:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.662 23:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.662 23:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.662 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:55.662 23:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:55.662 23:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:55.662 23:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:55.662 23:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:55.662 23:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:55.662 23:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:55.662 23:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:55.662 23:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:55.662 23:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:55.662 23:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:55.662 23:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:55.662 23:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.662 23:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.920 nvme0n1 00:23:55.920 23:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.920 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:55.920 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:55.920 23:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.920 23:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.920 23:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.920 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:55.920 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:55.920 23:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.920 23:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.920 23:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.920 23:27:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:55.920 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:23:55.920 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:55.920 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:55.920 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:55.920 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:55.920 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGMyMmI5OGY3ZjI4ZWRiYThmMGNmYTgxZDJmNzhiMmE2MWVlYThjOTAxYzlhMTNlmI+AFw==: 00:23:55.920 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjMxMWJiYmRhMWQxOTRiODI2YWM3OTU0Njg4MGYzZTT4mE5V: 00:23:55.920 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:55.920 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:55.920 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGMyMmI5OGY3ZjI4ZWRiYThmMGNmYTgxZDJmNzhiMmE2MWVlYThjOTAxYzlhMTNlmI+AFw==: 00:23:55.920 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjMxMWJiYmRhMWQxOTRiODI2YWM3OTU0Njg4MGYzZTT4mE5V: ]] 00:23:55.920 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjMxMWJiYmRhMWQxOTRiODI2YWM3OTU0Njg4MGYzZTT4mE5V: 00:23:55.920 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:23:55.920 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:55.920 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:55.920 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:55.920 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:55.920 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:55.920 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:55.920 23:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.920 23:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.920 23:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.920 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:55.921 23:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:55.921 23:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:55.921 23:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:55.921 23:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:55.921 23:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:55.921 23:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:55.921 23:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:55.921 23:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:55.921 23:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:55.921 23:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:55.921 23:27:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:55.921 23:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.921 23:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.199 nvme0n1 00:23:56.199 23:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.199 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:56.200 23:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.200 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:56.200 23:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.200 23:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.200 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:56.200 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:56.200 23:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.200 23:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.200 23:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.200 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:56.200 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:23:56.200 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:56.200 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:56.200 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:56.200 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:56.200 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDA1NzY2ZmY2ZGNkNzkyMTRjNzY3OWYwNjBhN2MxZGMzNGE1OTgxYTcyNzU2MzM4YjI2MTU1YmQ0Mjc3ODMyMKIAFXU=: 00:23:56.200 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:56.200 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:56.200 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:56.200 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDA1NzY2ZmY2ZGNkNzkyMTRjNzY3OWYwNjBhN2MxZGMzNGE1OTgxYTcyNzU2MzM4YjI2MTU1YmQ0Mjc3ODMyMKIAFXU=: 00:23:56.200 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:56.200 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:23:56.200 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:56.200 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:56.200 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:56.200 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:56.200 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:56.200 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:56.200 23:27:11 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.200 23:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.200 23:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.200 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:56.200 23:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:56.200 23:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:56.200 23:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:56.200 23:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:56.200 23:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:56.200 23:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:56.200 23:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:56.200 23:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:56.200 23:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:56.200 23:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:56.200 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:56.200 23:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.200 23:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.489 nvme0n1 00:23:56.489 23:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.489 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:56.489 23:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.489 23:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.489 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:56.489 23:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.489 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:56.489 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:56.489 23:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.489 23:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.489 23:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.489 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:56.489 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:56.489 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:23:56.489 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:56.489 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:56.489 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:56.489 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:56.489 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OTU1ZDc2YmIzZmUyODE5NzRhOWYxN2MzNTE0M2IwNWNEzqNT: 00:23:56.489 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWU2ZTU5YTc2MDVlZjU1ZjhlNTc1MzBmMTdiMzAwZjE4NzY4OWVkN2NjNTNjMjllNDdmMzVlMTk4ZTRkNDljZB7gIto=: 00:23:56.489 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:56.489 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:56.489 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU1ZDc2YmIzZmUyODE5NzRhOWYxN2MzNTE0M2IwNWNEzqNT: 00:23:56.489 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWU2ZTU5YTc2MDVlZjU1ZjhlNTc1MzBmMTdiMzAwZjE4NzY4OWVkN2NjNTNjMjllNDdmMzVlMTk4ZTRkNDljZB7gIto=: ]] 00:23:56.489 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWU2ZTU5YTc2MDVlZjU1ZjhlNTc1MzBmMTdiMzAwZjE4NzY4OWVkN2NjNTNjMjllNDdmMzVlMTk4ZTRkNDljZB7gIto=: 00:23:56.489 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:23:56.489 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:56.489 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:56.489 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:56.489 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:56.489 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:56.489 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:56.489 23:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.489 23:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.489 23:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.489 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:56.489 23:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:56.489 23:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:56.489 23:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:56.489 23:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:56.489 23:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:56.489 23:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:56.489 23:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:56.489 23:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:56.489 23:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:56.489 23:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:56.489 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:56.489 23:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.489 23:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.746 nvme0n1 00:23:56.746 23:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.746 
23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:56.746 23:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.747 23:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.747 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:56.747 23:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.747 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:56.747 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:56.747 23:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.747 23:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.747 23:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.747 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:56.747 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:23:56.747 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:56.747 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:56.747 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:56.747 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:56.747 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgzMGFjN2Y4MTY0NTAwZDEyNTQ4MTJlZDAxYTQzODlkMGFiMzJkNWRjNmQ3ZGZkIOlWzA==: 00:23:56.747 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: 00:23:56.747 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:56.747 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:56.747 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgzMGFjN2Y4MTY0NTAwZDEyNTQ4MTJlZDAxYTQzODlkMGFiMzJkNWRjNmQ3ZGZkIOlWzA==: 00:23:56.747 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: ]] 00:23:56.747 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: 00:23:56.747 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:23:56.747 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:56.747 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:56.747 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:56.747 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:56.747 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:56.747 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:56.747 23:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.747 23:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.747 23:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.747 23:27:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:56.747 23:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:56.747 23:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:56.747 23:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:56.747 23:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:56.747 23:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:56.747 23:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:56.747 23:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:56.747 23:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:56.747 23:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:56.747 23:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:56.747 23:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:56.747 23:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.747 23:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.004 nvme0n1 00:23:57.004 23:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.004 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:57.004 23:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.004 23:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.004 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:57.004 23:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.004 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.004 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:57.004 23:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.004 23:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.004 23:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.004 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:57.004 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:23:57.004 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:57.004 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:57.004 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:57.004 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:57.004 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODQ4M2EzNWUzMDIzNTQyMjE1YTY4NGFlYTk0ZDk4MTlkmNtZ: 00:23:57.004 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGQ1N2U3Zjk4ZmNiN2Q3NjhmZDk2ZWY5NDk0ODAxZTZnzRsA: 00:23:57.004 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:57.004 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
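For reference, the host-side sequence that the xtrace above repeats for every digest/dhgroup/keyid combination can be reproduced by hand. The sketch below is a reconstruction under stated assumptions, not a verbatim excerpt of host/auth.sh: it assumes rpc_cmd resolves to SPDK's scripts/rpc.py as in autotest_common.sh, and that the key2/ckey2 key names were registered earlier in the run to match the DHHC-1 secrets pushed to the kernel nvmet target; the transport, address, port and NQNs are copied from the trace.

# One iteration of the auth loop, shown for the combination being set up at this point in the
# trace (digest sha512, dhgroup ffdhe3072, keyid 2), assuming a reachable target at 10.0.0.1:4420.
./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2
# The test then checks that the authenticated controller came up, and detaches it
# before moving on to the next digest/dhgroup/keyid combination.
./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'    # expected output: nvme0
./scripts/rpc.py bdev_nvme_detach_controller nvme0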
00:23:57.004 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODQ4M2EzNWUzMDIzNTQyMjE1YTY4NGFlYTk0ZDk4MTlkmNtZ: 00:23:57.005 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGQ1N2U3Zjk4ZmNiN2Q3NjhmZDk2ZWY5NDk0ODAxZTZnzRsA: ]] 00:23:57.005 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGQ1N2U3Zjk4ZmNiN2Q3NjhmZDk2ZWY5NDk0ODAxZTZnzRsA: 00:23:57.005 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:23:57.005 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:57.005 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:57.005 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:57.005 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:57.005 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:57.005 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:57.005 23:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.005 23:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.005 23:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.005 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:57.005 23:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:57.005 23:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:57.005 23:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:57.005 23:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:57.005 23:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:57.005 23:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:57.005 23:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:57.005 23:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:57.005 23:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:57.005 23:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:57.005 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:57.005 23:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.005 23:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.263 nvme0n1 00:23:57.263 23:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.263 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:57.263 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:57.263 23:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.263 23:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.263 23:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.263 23:27:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.263 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:57.263 23:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.263 23:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.263 23:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.263 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:57.263 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:23:57.263 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:57.263 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:57.263 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:57.263 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:57.263 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGMyMmI5OGY3ZjI4ZWRiYThmMGNmYTgxZDJmNzhiMmE2MWVlYThjOTAxYzlhMTNlmI+AFw==: 00:23:57.263 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjMxMWJiYmRhMWQxOTRiODI2YWM3OTU0Njg4MGYzZTT4mE5V: 00:23:57.263 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:57.263 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:57.263 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGMyMmI5OGY3ZjI4ZWRiYThmMGNmYTgxZDJmNzhiMmE2MWVlYThjOTAxYzlhMTNlmI+AFw==: 00:23:57.263 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjMxMWJiYmRhMWQxOTRiODI2YWM3OTU0Njg4MGYzZTT4mE5V: ]] 00:23:57.263 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjMxMWJiYmRhMWQxOTRiODI2YWM3OTU0Njg4MGYzZTT4mE5V: 00:23:57.263 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:23:57.263 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:57.263 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:57.263 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:57.263 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:57.263 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:57.263 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:57.263 23:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.263 23:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.263 23:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.263 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:57.263 23:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:57.263 23:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:57.263 23:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:57.263 23:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:57.263 23:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
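The get_main_ns_ip helper appears throughout this trace only as expanded xtrace records. A minimal bash reconstruction of the selection logic those records show is sketched below; it is an inference from the trace rather than the verbatim nvmf/common.sh function, and using TEST_TRANSPORT as the lookup key is an assumption (the trace only shows the already-expanded value "tcp").

# Sketch: pick the data-path IP for the transport under test; the trace above resolves it to 10.0.0.1.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA runs dereference the first target IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP       # TCP runs (this job) dereference the initiator IP
    # Bail out if the transport is unset or has no candidate variable mapped.
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    # The map stores a variable *name*; indirect expansion turns it into the address itself.
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1
    echo "${!ip}"
}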
00:23:57.263 23:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:57.263 23:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:57.263 23:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:57.263 23:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:57.263 23:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:57.263 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:57.263 23:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.263 23:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.521 nvme0n1 00:23:57.521 23:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.521 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:57.521 23:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.521 23:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.521 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:57.521 23:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.521 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.522 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:57.522 23:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.522 23:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.522 23:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.522 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:57.522 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:23:57.522 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:57.522 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:57.522 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:57.522 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:57.522 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDA1NzY2ZmY2ZGNkNzkyMTRjNzY3OWYwNjBhN2MxZGMzNGE1OTgxYTcyNzU2MzM4YjI2MTU1YmQ0Mjc3ODMyMKIAFXU=: 00:23:57.522 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:57.522 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:57.522 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:57.522 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDA1NzY2ZmY2ZGNkNzkyMTRjNzY3OWYwNjBhN2MxZGMzNGE1OTgxYTcyNzU2MzM4YjI2MTU1YmQ0Mjc3ODMyMKIAFXU=: 00:23:57.522 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:57.522 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:23:57.522 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:57.522 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:57.522 
23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:57.522 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:57.522 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:57.522 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:57.522 23:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.522 23:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.522 23:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.522 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:57.522 23:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:57.522 23:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:57.522 23:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:57.522 23:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:57.522 23:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:57.522 23:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:57.522 23:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:57.522 23:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:57.522 23:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:57.522 23:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:57.522 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:57.522 23:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.522 23:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.780 nvme0n1 00:23:57.780 23:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.780 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:57.780 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:57.780 23:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.780 23:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.780 23:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.780 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.780 23:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:57.780 23:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.780 23:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.780 23:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.780 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:57.780 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:57.780 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:23:57.780 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:57.780 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:57.780 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:57.780 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:57.780 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU1ZDc2YmIzZmUyODE5NzRhOWYxN2MzNTE0M2IwNWNEzqNT: 00:23:57.780 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWU2ZTU5YTc2MDVlZjU1ZjhlNTc1MzBmMTdiMzAwZjE4NzY4OWVkN2NjNTNjMjllNDdmMzVlMTk4ZTRkNDljZB7gIto=: 00:23:57.780 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:57.780 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:57.780 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU1ZDc2YmIzZmUyODE5NzRhOWYxN2MzNTE0M2IwNWNEzqNT: 00:23:57.780 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWU2ZTU5YTc2MDVlZjU1ZjhlNTc1MzBmMTdiMzAwZjE4NzY4OWVkN2NjNTNjMjllNDdmMzVlMTk4ZTRkNDljZB7gIto=: ]] 00:23:57.780 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWU2ZTU5YTc2MDVlZjU1ZjhlNTc1MzBmMTdiMzAwZjE4NzY4OWVkN2NjNTNjMjllNDdmMzVlMTk4ZTRkNDljZB7gIto=: 00:23:57.780 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:23:57.780 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:57.780 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:57.780 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:57.780 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:57.780 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:57.780 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:57.780 23:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.780 23:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.780 23:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.780 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:57.780 23:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:57.780 23:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:57.780 23:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:57.780 23:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:57.780 23:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:57.780 23:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:57.780 23:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:57.780 23:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:57.780 23:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:57.780 23:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:57.780 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:57.780 23:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.780 23:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.038 nvme0n1 00:23:58.038 23:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.038 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:58.038 23:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.038 23:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.038 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:58.295 23:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.295 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.295 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:58.295 23:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.295 23:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.295 23:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.295 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:58.295 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:23:58.295 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:58.295 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:58.295 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:58.295 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:58.295 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgzMGFjN2Y4MTY0NTAwZDEyNTQ4MTJlZDAxYTQzODlkMGFiMzJkNWRjNmQ3ZGZkIOlWzA==: 00:23:58.295 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: 00:23:58.295 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:58.295 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:58.295 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgzMGFjN2Y4MTY0NTAwZDEyNTQ4MTJlZDAxYTQzODlkMGFiMzJkNWRjNmQ3ZGZkIOlWzA==: 00:23:58.295 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: ]] 00:23:58.295 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: 00:23:58.295 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:23:58.295 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:58.295 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:58.295 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:58.295 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:58.295 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:58.295 23:27:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:58.295 23:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.295 23:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.295 23:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.295 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:58.295 23:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:58.295 23:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:58.295 23:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:58.295 23:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:58.295 23:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:58.295 23:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:58.295 23:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:58.295 23:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:58.295 23:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:58.295 23:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:58.295 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:58.295 23:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.295 23:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.552 nvme0n1 00:23:58.552 23:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.552 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:58.552 23:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.552 23:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.552 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:58.552 23:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.552 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.552 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:58.552 23:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.552 23:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.552 23:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.552 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:58.552 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:23:58.552 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:58.552 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:58.552 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:58.552 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
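For reference, the get_main_ns_ip helper traced repeatedly in this run does nothing more than pick the environment variable that names the address to dial for the active transport and then dereference it. A minimal plain-bash sketch of that selection logic follows; everything beyond what the trace itself shows (the TEST_TRANSPORT variable name in particular, and the exact return handling) is an assumption about the surrounding test environment, not taken from this log:

get_main_ns_ip() {
    local ip
    local -A ip_candidates
    # One candidate variable name per transport, as in the trace above.
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    # Bail out if the transport or its candidate variable name is empty (assumed behavior).
    [[ -z $TEST_TRANSPORT ]] && return 1
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    # Indirect expansion: print the value of the variable whose name is stored in $ip.
    [[ -z ${!ip} ]] && return 1
    echo "${!ip}"
}

In this run the transport is tcp and NVMF_INITIATOR_IP resolves to 10.0.0.1, which is why every get_main_ns_ip trace in this log ends with "echo 10.0.0.1".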
00:23:58.552 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODQ4M2EzNWUzMDIzNTQyMjE1YTY4NGFlYTk0ZDk4MTlkmNtZ: 00:23:58.553 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGQ1N2U3Zjk4ZmNiN2Q3NjhmZDk2ZWY5NDk0ODAxZTZnzRsA: 00:23:58.553 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:58.553 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:58.553 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODQ4M2EzNWUzMDIzNTQyMjE1YTY4NGFlYTk0ZDk4MTlkmNtZ: 00:23:58.553 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGQ1N2U3Zjk4ZmNiN2Q3NjhmZDk2ZWY5NDk0ODAxZTZnzRsA: ]] 00:23:58.553 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGQ1N2U3Zjk4ZmNiN2Q3NjhmZDk2ZWY5NDk0ODAxZTZnzRsA: 00:23:58.553 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:23:58.553 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:58.553 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:58.553 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:58.553 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:58.553 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:58.553 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:58.553 23:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.553 23:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.553 23:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.553 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:58.553 23:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:58.553 23:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:58.553 23:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:58.553 23:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:58.553 23:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:58.553 23:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:58.553 23:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:58.553 23:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:58.553 23:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:58.553 23:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:58.553 23:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:58.553 23:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.553 23:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.117 nvme0n1 00:23:59.117 23:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.117 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:23:59.117 23:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.117 23:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.117 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:59.117 23:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.117 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.117 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.117 23:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.117 23:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.117 23:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.117 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:59.117 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:23:59.117 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:59.117 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:59.117 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:59.117 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:59.117 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGMyMmI5OGY3ZjI4ZWRiYThmMGNmYTgxZDJmNzhiMmE2MWVlYThjOTAxYzlhMTNlmI+AFw==: 00:23:59.117 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjMxMWJiYmRhMWQxOTRiODI2YWM3OTU0Njg4MGYzZTT4mE5V: 00:23:59.117 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:59.117 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:59.117 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGMyMmI5OGY3ZjI4ZWRiYThmMGNmYTgxZDJmNzhiMmE2MWVlYThjOTAxYzlhMTNlmI+AFw==: 00:23:59.117 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjMxMWJiYmRhMWQxOTRiODI2YWM3OTU0Njg4MGYzZTT4mE5V: ]] 00:23:59.117 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjMxMWJiYmRhMWQxOTRiODI2YWM3OTU0Njg4MGYzZTT4mE5V: 00:23:59.117 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:23:59.117 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:59.117 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:59.117 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:59.117 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:59.117 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:59.117 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:59.117 23:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.117 23:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.117 23:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.117 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:59.117 23:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:23:59.117 23:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:59.117 23:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:59.117 23:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.117 23:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.117 23:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:59.117 23:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.117 23:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:59.117 23:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:59.117 23:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:59.117 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:59.117 23:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.117 23:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.374 nvme0n1 00:23:59.374 23:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.374 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.374 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:59.374 23:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.374 23:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.374 23:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.374 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.374 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.374 23:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.374 23:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.374 23:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.374 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:59.374 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:23:59.374 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:59.374 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:59.374 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:59.374 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:59.374 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDA1NzY2ZmY2ZGNkNzkyMTRjNzY3OWYwNjBhN2MxZGMzNGE1OTgxYTcyNzU2MzM4YjI2MTU1YmQ0Mjc3ODMyMKIAFXU=: 00:23:59.374 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:59.374 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:59.374 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:59.374 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NDA1NzY2ZmY2ZGNkNzkyMTRjNzY3OWYwNjBhN2MxZGMzNGE1OTgxYTcyNzU2MzM4YjI2MTU1YmQ0Mjc3ODMyMKIAFXU=: 00:23:59.374 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:59.374 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:23:59.374 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:59.374 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:59.374 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:59.374 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:59.374 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:59.374 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:59.374 23:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.374 23:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.374 23:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.374 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:59.374 23:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:59.374 23:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:59.374 23:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:59.374 23:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.374 23:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.374 23:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:59.374 23:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.374 23:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:59.374 23:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:59.374 23:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:59.374 23:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:59.374 23:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.374 23:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.939 nvme0n1 00:23:59.939 23:27:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.939 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.939 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:59.939 23:27:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.939 23:27:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.939 23:27:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.939 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.939 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.939 23:27:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:23:59.939 23:27:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.939 23:27:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.939 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:59.939 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:59.939 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:23:59.939 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:59.939 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:59.939 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:59.939 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:59.939 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU1ZDc2YmIzZmUyODE5NzRhOWYxN2MzNTE0M2IwNWNEzqNT: 00:23:59.939 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWU2ZTU5YTc2MDVlZjU1ZjhlNTc1MzBmMTdiMzAwZjE4NzY4OWVkN2NjNTNjMjllNDdmMzVlMTk4ZTRkNDljZB7gIto=: 00:23:59.939 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:59.939 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:59.939 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU1ZDc2YmIzZmUyODE5NzRhOWYxN2MzNTE0M2IwNWNEzqNT: 00:23:59.939 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWU2ZTU5YTc2MDVlZjU1ZjhlNTc1MzBmMTdiMzAwZjE4NzY4OWVkN2NjNTNjMjllNDdmMzVlMTk4ZTRkNDljZB7gIto=: ]] 00:23:59.939 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWU2ZTU5YTc2MDVlZjU1ZjhlNTc1MzBmMTdiMzAwZjE4NzY4OWVkN2NjNTNjMjllNDdmMzVlMTk4ZTRkNDljZB7gIto=: 00:23:59.939 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:23:59.939 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:59.939 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:59.939 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:59.939 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:59.939 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:59.940 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:59.940 23:27:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.940 23:27:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.940 23:27:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.940 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:59.940 23:27:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:59.940 23:27:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:59.940 23:27:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:59.940 23:27:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.940 23:27:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.940 23:27:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:23:59.940 23:27:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.940 23:27:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:59.940 23:27:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:59.940 23:27:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:59.940 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:59.940 23:27:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.940 23:27:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.504 nvme0n1 00:24:00.504 23:27:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.504 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:00.504 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:00.504 23:27:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.504 23:27:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.504 23:27:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.504 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.504 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:00.504 23:27:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.504 23:27:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.504 23:27:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.504 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:00.504 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:24:00.504 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:00.504 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:00.504 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:00.504 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:00.504 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgzMGFjN2Y4MTY0NTAwZDEyNTQ4MTJlZDAxYTQzODlkMGFiMzJkNWRjNmQ3ZGZkIOlWzA==: 00:24:00.504 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: 00:24:00.504 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:00.504 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:00.504 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgzMGFjN2Y4MTY0NTAwZDEyNTQ4MTJlZDAxYTQzODlkMGFiMzJkNWRjNmQ3ZGZkIOlWzA==: 00:24:00.504 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: ]] 00:24:00.504 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: 00:24:00.504 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
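The connect_authenticate step that the trace picks up below reduces to four RPC calls against the live target. A condensed sketch of that sequence for this iteration (sha512 digest, ffdhe6144 DH group, key id 1), assuming rpc_cmd is the test-harness wrapper around SPDK's RPC client and that the key1/ckey1 key names were registered earlier in the test run (that setup is not part of this excerpt):

# Limit the host to the digest and DH group under test.
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
# Attach with bidirectional DH-HMAC-CHAP: key1 authenticates the host, ckey1 the controller.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
# The attach only completes if authentication succeeded; confirm the controller exists, then detach.
rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'    # expect nvme0
rpc_cmd bdev_nvme_detach_controller nvme0

The same pattern repeats in this log for every digest, DH group, and key id combination; only the values passed to bdev_nvme_set_options and the key names change.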
00:24:00.505 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:00.505 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:00.505 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:00.505 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:00.505 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:00.505 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:00.505 23:27:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.505 23:27:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.505 23:27:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.505 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:00.505 23:27:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:00.505 23:27:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:00.505 23:27:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:00.505 23:27:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.505 23:27:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.505 23:27:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:00.505 23:27:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:00.505 23:27:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:00.505 23:27:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:00.505 23:27:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:00.505 23:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:00.505 23:27:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.505 23:27:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.070 nvme0n1 00:24:01.070 23:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.070 23:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:01.070 23:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:01.070 23:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.070 23:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.070 23:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.070 23:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.070 23:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:01.070 23:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.070 23:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.070 23:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.070 23:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:24:01.070 23:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:24:01.070 23:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:01.070 23:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:01.070 23:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:01.070 23:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:01.070 23:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODQ4M2EzNWUzMDIzNTQyMjE1YTY4NGFlYTk0ZDk4MTlkmNtZ: 00:24:01.070 23:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGQ1N2U3Zjk4ZmNiN2Q3NjhmZDk2ZWY5NDk0ODAxZTZnzRsA: 00:24:01.070 23:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:01.070 23:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:01.070 23:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODQ4M2EzNWUzMDIzNTQyMjE1YTY4NGFlYTk0ZDk4MTlkmNtZ: 00:24:01.070 23:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGQ1N2U3Zjk4ZmNiN2Q3NjhmZDk2ZWY5NDk0ODAxZTZnzRsA: ]] 00:24:01.070 23:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGQ1N2U3Zjk4ZmNiN2Q3NjhmZDk2ZWY5NDk0ODAxZTZnzRsA: 00:24:01.070 23:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:24:01.070 23:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:01.070 23:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:01.070 23:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:01.070 23:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:01.070 23:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:01.070 23:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:01.070 23:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.070 23:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.070 23:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.070 23:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:01.070 23:27:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:01.070 23:27:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:01.070 23:27:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:01.070 23:27:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:01.070 23:27:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:01.070 23:27:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:01.070 23:27:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:01.070 23:27:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:01.070 23:27:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:01.070 23:27:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:01.070 23:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:01.070 23:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.070 23:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.635 nvme0n1 00:24:01.635 23:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.635 23:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:01.635 23:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:01.635 23:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.635 23:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.892 23:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.892 23:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.892 23:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:01.892 23:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.892 23:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.892 23:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.892 23:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:01.892 23:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:24:01.892 23:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:01.892 23:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:01.892 23:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:01.892 23:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:01.892 23:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGMyMmI5OGY3ZjI4ZWRiYThmMGNmYTgxZDJmNzhiMmE2MWVlYThjOTAxYzlhMTNlmI+AFw==: 00:24:01.892 23:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjMxMWJiYmRhMWQxOTRiODI2YWM3OTU0Njg4MGYzZTT4mE5V: 00:24:01.892 23:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:01.892 23:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:01.892 23:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGMyMmI5OGY3ZjI4ZWRiYThmMGNmYTgxZDJmNzhiMmE2MWVlYThjOTAxYzlhMTNlmI+AFw==: 00:24:01.892 23:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjMxMWJiYmRhMWQxOTRiODI2YWM3OTU0Njg4MGYzZTT4mE5V: ]] 00:24:01.892 23:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjMxMWJiYmRhMWQxOTRiODI2YWM3OTU0Njg4MGYzZTT4mE5V: 00:24:01.892 23:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:24:01.892 23:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:01.892 23:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:01.892 23:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:01.892 23:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:01.892 23:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:01.892 23:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:01.892 23:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.892 23:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.892 23:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.892 23:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:01.892 23:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:01.892 23:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:01.892 23:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:01.892 23:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:01.892 23:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:01.892 23:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:01.892 23:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:01.892 23:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:01.892 23:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:01.892 23:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:01.892 23:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:01.892 23:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.892 23:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.458 nvme0n1 00:24:02.458 23:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.458 23:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:02.458 23:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.458 23:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.458 23:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:02.458 23:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.458 23:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.458 23:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:02.458 23:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.458 23:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.458 23:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.458 23:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:02.458 23:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:24:02.458 23:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:02.458 23:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:02.458 23:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:02.458 23:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:02.458 23:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NDA1NzY2ZmY2ZGNkNzkyMTRjNzY3OWYwNjBhN2MxZGMzNGE1OTgxYTcyNzU2MzM4YjI2MTU1YmQ0Mjc3ODMyMKIAFXU=: 00:24:02.458 23:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:02.458 23:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:02.458 23:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:02.458 23:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDA1NzY2ZmY2ZGNkNzkyMTRjNzY3OWYwNjBhN2MxZGMzNGE1OTgxYTcyNzU2MzM4YjI2MTU1YmQ0Mjc3ODMyMKIAFXU=: 00:24:02.458 23:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:02.458 23:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:24:02.458 23:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:02.458 23:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:02.458 23:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:02.458 23:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:02.458 23:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:02.458 23:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:02.458 23:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.458 23:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.458 23:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.458 23:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:02.458 23:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:02.458 23:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:02.458 23:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:02.458 23:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.458 23:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.458 23:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:02.458 23:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.458 23:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:02.458 23:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:02.458 23:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:02.458 23:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:02.458 23:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.458 23:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.024 nvme0n1 00:24:03.024 23:27:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.024 23:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:03.024 23:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:03.024 23:27:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.024 23:27:18 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.024 23:27:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.024 23:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.024 23:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.024 23:27:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.024 23:27:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.024 23:27:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.024 23:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:03.024 23:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:03.024 23:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:24:03.024 23:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:03.024 23:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:03.024 23:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:03.024 23:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:03.024 23:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU1ZDc2YmIzZmUyODE5NzRhOWYxN2MzNTE0M2IwNWNEzqNT: 00:24:03.024 23:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWU2ZTU5YTc2MDVlZjU1ZjhlNTc1MzBmMTdiMzAwZjE4NzY4OWVkN2NjNTNjMjllNDdmMzVlMTk4ZTRkNDljZB7gIto=: 00:24:03.024 23:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:03.024 23:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:03.024 23:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU1ZDc2YmIzZmUyODE5NzRhOWYxN2MzNTE0M2IwNWNEzqNT: 00:24:03.024 23:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWU2ZTU5YTc2MDVlZjU1ZjhlNTc1MzBmMTdiMzAwZjE4NzY4OWVkN2NjNTNjMjllNDdmMzVlMTk4ZTRkNDljZB7gIto=: ]] 00:24:03.024 23:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWU2ZTU5YTc2MDVlZjU1ZjhlNTc1MzBmMTdiMzAwZjE4NzY4OWVkN2NjNTNjMjllNDdmMzVlMTk4ZTRkNDljZB7gIto=: 00:24:03.024 23:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:24:03.024 23:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:03.024 23:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:03.024 23:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:03.024 23:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:03.024 23:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:03.024 23:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:03.024 23:27:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.024 23:27:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.282 23:27:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.282 23:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:03.282 23:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:03.282 23:27:18 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:24:03.282 23:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:03.282 23:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:03.282 23:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:03.282 23:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:03.282 23:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:03.282 23:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:03.283 23:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:03.283 23:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:03.283 23:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:03.283 23:27:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.283 23:27:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.216 nvme0n1 00:24:04.216 23:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.216 23:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:04.216 23:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.216 23:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.216 23:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:04.216 23:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.216 23:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.216 23:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:04.216 23:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.216 23:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.216 23:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.216 23:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:04.216 23:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:24:04.216 23:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:04.216 23:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:04.216 23:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:04.216 23:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:04.216 23:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgzMGFjN2Y4MTY0NTAwZDEyNTQ4MTJlZDAxYTQzODlkMGFiMzJkNWRjNmQ3ZGZkIOlWzA==: 00:24:04.216 23:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: 00:24:04.216 23:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:04.216 23:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:04.216 23:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MjgzMGFjN2Y4MTY0NTAwZDEyNTQ4MTJlZDAxYTQzODlkMGFiMzJkNWRjNmQ3ZGZkIOlWzA==: 00:24:04.216 23:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: ]] 00:24:04.216 23:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: 00:24:04.216 23:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:24:04.216 23:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:04.216 23:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:04.216 23:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:04.216 23:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:04.216 23:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:04.216 23:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:04.216 23:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.216 23:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.216 23:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.216 23:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:04.216 23:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:04.216 23:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:04.216 23:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:04.216 23:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:04.216 23:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:04.216 23:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:04.216 23:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:04.216 23:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:04.216 23:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:04.216 23:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:04.216 23:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:04.216 23:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.216 23:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.151 nvme0n1 00:24:05.151 23:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.151 23:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:05.151 23:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:05.151 23:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.151 23:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.151 23:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.409 23:27:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:05.409 23:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:05.409 23:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.409 23:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.409 23:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.409 23:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:05.409 23:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:24:05.409 23:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:05.409 23:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:05.409 23:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:05.409 23:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:05.409 23:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODQ4M2EzNWUzMDIzNTQyMjE1YTY4NGFlYTk0ZDk4MTlkmNtZ: 00:24:05.409 23:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGQ1N2U3Zjk4ZmNiN2Q3NjhmZDk2ZWY5NDk0ODAxZTZnzRsA: 00:24:05.409 23:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:05.409 23:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:05.409 23:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODQ4M2EzNWUzMDIzNTQyMjE1YTY4NGFlYTk0ZDk4MTlkmNtZ: 00:24:05.409 23:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGQ1N2U3Zjk4ZmNiN2Q3NjhmZDk2ZWY5NDk0ODAxZTZnzRsA: ]] 00:24:05.409 23:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGQ1N2U3Zjk4ZmNiN2Q3NjhmZDk2ZWY5NDk0ODAxZTZnzRsA: 00:24:05.409 23:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:24:05.409 23:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:05.409 23:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:05.409 23:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:05.409 23:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:05.409 23:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:05.409 23:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:05.409 23:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.409 23:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.409 23:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.409 23:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:05.409 23:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:05.409 23:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:05.409 23:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:05.409 23:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:05.409 23:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:05.409 23:27:20 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:05.409 23:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:05.409 23:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:05.409 23:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:05.409 23:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:05.409 23:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:05.409 23:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.409 23:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.343 nvme0n1 00:24:06.343 23:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.343 23:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:06.343 23:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.343 23:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.343 23:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:06.343 23:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.343 23:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.343 23:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:06.343 23:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.343 23:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.343 23:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.343 23:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:06.343 23:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:24:06.343 23:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:06.343 23:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:06.343 23:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:06.343 23:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:06.343 23:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGMyMmI5OGY3ZjI4ZWRiYThmMGNmYTgxZDJmNzhiMmE2MWVlYThjOTAxYzlhMTNlmI+AFw==: 00:24:06.343 23:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjMxMWJiYmRhMWQxOTRiODI2YWM3OTU0Njg4MGYzZTT4mE5V: 00:24:06.343 23:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:06.343 23:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:06.343 23:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGMyMmI5OGY3ZjI4ZWRiYThmMGNmYTgxZDJmNzhiMmE2MWVlYThjOTAxYzlhMTNlmI+AFw==: 00:24:06.343 23:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjMxMWJiYmRhMWQxOTRiODI2YWM3OTU0Njg4MGYzZTT4mE5V: ]] 00:24:06.343 23:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjMxMWJiYmRhMWQxOTRiODI2YWM3OTU0Njg4MGYzZTT4mE5V: 00:24:06.343 23:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:24:06.343 23:27:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:06.343 23:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:06.343 23:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:06.343 23:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:06.343 23:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:06.343 23:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:06.343 23:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.343 23:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.343 23:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.343 23:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:06.343 23:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:06.343 23:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:06.343 23:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:06.343 23:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.343 23:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.343 23:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:06.343 23:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:06.343 23:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:06.343 23:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:06.343 23:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:06.343 23:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:06.343 23:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.344 23:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.715 nvme0n1 00:24:07.715 23:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.715 23:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.715 23:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.715 23:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:07.715 23:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.715 23:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.715 23:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.715 23:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.715 23:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.715 23:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.715 23:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.715 23:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:24:07.715 23:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:24:07.715 23:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:07.715 23:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:07.715 23:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:07.715 23:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:07.715 23:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDA1NzY2ZmY2ZGNkNzkyMTRjNzY3OWYwNjBhN2MxZGMzNGE1OTgxYTcyNzU2MzM4YjI2MTU1YmQ0Mjc3ODMyMKIAFXU=: 00:24:07.715 23:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:07.715 23:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:07.715 23:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:07.715 23:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDA1NzY2ZmY2ZGNkNzkyMTRjNzY3OWYwNjBhN2MxZGMzNGE1OTgxYTcyNzU2MzM4YjI2MTU1YmQ0Mjc3ODMyMKIAFXU=: 00:24:07.715 23:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:07.715 23:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:24:07.715 23:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:07.715 23:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:07.715 23:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:07.715 23:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:07.715 23:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:07.715 23:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:07.715 23:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.715 23:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.715 23:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.715 23:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:07.715 23:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:07.715 23:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:07.715 23:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:07.715 23:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.715 23:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.715 23:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:07.715 23:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.715 23:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:07.715 23:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:07.715 23:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:07.715 23:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:07.715 23:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:24:07.715 23:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.645 nvme0n1 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgzMGFjN2Y4MTY0NTAwZDEyNTQ4MTJlZDAxYTQzODlkMGFiMzJkNWRjNmQ3ZGZkIOlWzA==: 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgzMGFjN2Y4MTY0NTAwZDEyNTQ4MTJlZDAxYTQzODlkMGFiMzJkNWRjNmQ3ZGZkIOlWzA==: 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: ]] 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTI1ZDhmNTY5YTExYzZkNTE2Y2Y3YTI5Y2Q1YzgwZmRhYTc0ODNkMzliZjJhMjFl9H9yHA==: 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.645 
23:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.645 request: 00:24:08.645 { 00:24:08.645 "name": "nvme0", 00:24:08.645 "trtype": "tcp", 00:24:08.645 "traddr": "10.0.0.1", 00:24:08.645 "adrfam": "ipv4", 00:24:08.645 "trsvcid": "4420", 00:24:08.645 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:08.645 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:08.645 "prchk_reftag": false, 00:24:08.645 "prchk_guard": false, 00:24:08.645 "hdgst": false, 00:24:08.645 "ddgst": false, 00:24:08.645 "method": "bdev_nvme_attach_controller", 00:24:08.645 "req_id": 1 00:24:08.645 } 00:24:08.645 Got JSON-RPC error response 00:24:08.645 response: 00:24:08.645 { 00:24:08.645 "code": -5, 00:24:08.645 "message": "Input/output error" 00:24:08.645 } 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:08.645 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:08.646 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:08.646 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:08.646 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:08.646 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.646 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.646 request: 00:24:08.646 { 00:24:08.646 "name": "nvme0", 00:24:08.646 "trtype": "tcp", 00:24:08.646 "traddr": "10.0.0.1", 00:24:08.646 "adrfam": "ipv4", 00:24:08.646 "trsvcid": "4420", 00:24:08.646 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:08.646 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:08.646 "prchk_reftag": false, 00:24:08.646 "prchk_guard": false, 00:24:08.646 "hdgst": false, 00:24:08.646 "ddgst": false, 00:24:08.646 "dhchap_key": "key2", 00:24:08.646 "method": "bdev_nvme_attach_controller", 00:24:08.646 "req_id": 1 00:24:08.646 } 00:24:08.646 Got JSON-RPC error response 00:24:08.646 response: 00:24:08.646 { 00:24:08.646 "code": -5, 00:24:08.646 "message": "Input/output error" 00:24:08.646 } 00:24:08.903 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:08.903 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:24:08.903 23:27:23 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:08.903 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:08.903 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:08.903 23:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.903 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.903 23:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:24:08.903 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.903 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.903 23:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:24:08.903 23:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:24:08.903 23:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:08.903 23:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:08.903 23:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:08.903 23:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.903 23:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.903 23:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:08.903 23:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:08.903 23:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:08.903 23:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:08.903 23:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:08.903 23:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:08.903 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:24:08.903 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:08.903 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:08.903 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:08.903 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:08.903 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:08.903 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:08.903 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.903 23:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.903 request: 00:24:08.903 { 00:24:08.903 "name": "nvme0", 00:24:08.903 "trtype": "tcp", 00:24:08.903 "traddr": "10.0.0.1", 00:24:08.903 "adrfam": "ipv4", 
00:24:08.903 "trsvcid": "4420", 00:24:08.903 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:08.903 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:08.903 "prchk_reftag": false, 00:24:08.903 "prchk_guard": false, 00:24:08.903 "hdgst": false, 00:24:08.903 "ddgst": false, 00:24:08.903 "dhchap_key": "key1", 00:24:08.903 "dhchap_ctrlr_key": "ckey2", 00:24:08.903 "method": "bdev_nvme_attach_controller", 00:24:08.903 "req_id": 1 00:24:08.903 } 00:24:08.903 Got JSON-RPC error response 00:24:08.903 response: 00:24:08.903 { 00:24:08.903 "code": -5, 00:24:08.903 "message": "Input/output error" 00:24:08.903 } 00:24:08.903 23:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:08.903 23:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:24:08.903 23:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:08.903 23:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:08.903 23:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:08.903 23:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:24:08.903 23:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:24:08.903 23:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:24:08.903 23:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:08.903 23:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:24:08.903 23:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:08.903 23:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:24:08.903 23:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:08.903 23:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:08.903 rmmod nvme_tcp 00:24:08.903 rmmod nvme_fabrics 00:24:08.903 23:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:08.903 23:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:24:08.903 23:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:24:08.903 23:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2425629 ']' 00:24:08.903 23:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2425629 00:24:08.903 23:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 2425629 ']' 00:24:08.903 23:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 2425629 00:24:08.903 23:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:24:08.903 23:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:08.903 23:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2425629 00:24:08.903 23:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:08.903 23:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:08.903 23:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2425629' 00:24:08.903 killing process with pid 2425629 00:24:08.903 23:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 2425629 00:24:08.903 23:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 2425629 00:24:09.162 23:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:24:09.162 23:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:09.162 23:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:09.162 23:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:09.162 23:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:09.162 23:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:09.162 23:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:09.162 23:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:11.697 23:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:11.697 23:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:11.697 23:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:11.697 23:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:24:11.697 23:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:24:11.697 23:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:24:11.697 23:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:11.697 23:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:11.697 23:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:11.697 23:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:11.697 23:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:11.697 23:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:24:11.697 23:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:12.633 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:12.633 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:12.633 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:12.633 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:12.633 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:12.633 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:12.633 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:12.633 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:12.633 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:12.633 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:12.633 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:12.633 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:12.633 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:12.633 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:12.633 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:12.633 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:13.568 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:24:13.826 23:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.h5P /tmp/spdk.key-null.ZcZ /tmp/spdk.key-sha256.EWp /tmp/spdk.key-sha384.qbO /tmp/spdk.key-sha512.KYK 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:24:13.826 23:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:14.760 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:24:14.760 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:24:14.760 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:24:14.760 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:24:14.760 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:24:14.760 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:24:14.760 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:24:14.760 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:24:14.760 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:24:14.760 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:24:14.760 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:24:14.760 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:24:15.017 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:24:15.017 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:24:15.017 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:24:15.017 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:24:15.017 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:24:15.017 00:24:15.017 real 0m52.687s 00:24:15.017 user 0m50.349s 00:24:15.017 sys 0m5.981s 00:24:15.017 23:27:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:15.017 23:27:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.017 ************************************ 00:24:15.017 END TEST nvmf_auth_host 00:24:15.017 ************************************ 00:24:15.017 23:27:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:15.017 23:27:30 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:24:15.017 23:27:30 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:15.017 23:27:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:15.017 23:27:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:15.017 23:27:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:15.017 ************************************ 00:24:15.017 START TEST nvmf_digest 00:24:15.017 ************************************ 00:24:15.017 23:27:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:15.317 * Looking for test storage... 
00:24:15.317 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:15.317 23:27:30 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:24:15.317 23:27:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:17.218 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:17.218 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:17.218 Found net devices under 0000:84:00.0: cvl_0_0 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:17.218 Found net devices under 0000:84:00.1: cvl_0_1 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:17.218 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:17.218 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:24:17.218 00:24:17.218 --- 10.0.0.2 ping statistics --- 00:24:17.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.218 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:17.218 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:17.218 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:24:17.218 00:24:17.218 --- 10.0.0.1 ping statistics --- 00:24:17.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.218 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:17.218 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:17.219 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:17.219 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:17.219 23:27:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:17.219 23:27:32 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:17.219 23:27:32 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:24:17.219 23:27:32 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:24:17.219 23:27:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:17.219 23:27:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:17.219 23:27:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:17.219 ************************************ 00:24:17.219 START TEST nvmf_digest_clean 00:24:17.219 ************************************ 00:24:17.219 23:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:24:17.219 23:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:24:17.219 23:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:24:17.219 23:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:24:17.219 23:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:24:17.219 23:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:24:17.219 23:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:17.219 23:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:17.219 23:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:17.219 23:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=2436231 00:24:17.219 23:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:17.219 23:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 2436231 00:24:17.219 23:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2436231 ']' 00:24:17.219 23:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:17.219 
23:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:17.219 23:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:17.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:17.219 23:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:17.219 23:27:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:17.219 [2024-07-15 23:27:32.501541] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:24:17.219 [2024-07-15 23:27:32.501636] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:17.477 EAL: No free 2048 kB hugepages reported on node 1 00:24:17.477 [2024-07-15 23:27:32.572230] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.477 [2024-07-15 23:27:32.691431] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:17.477 [2024-07-15 23:27:32.691510] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:17.477 [2024-07-15 23:27:32.691528] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:17.477 [2024-07-15 23:27:32.691542] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:17.477 [2024-07-15 23:27:32.691555] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:17.477 [2024-07-15 23:27:32.691587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:18.410 23:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:18.410 23:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:18.410 23:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:18.410 23:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:18.410 23:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:18.410 23:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:18.410 23:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:24:18.410 23:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:24:18.410 23:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:24:18.410 23:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.410 23:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:18.410 null0 00:24:18.410 [2024-07-15 23:27:33.628899] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:18.410 [2024-07-15 23:27:33.653122] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:18.410 23:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.410 23:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:24:18.410 23:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:18.410 23:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:18.410 23:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:18.410 23:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:18.410 23:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:18.410 23:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:18.410 23:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2436384 00:24:18.410 23:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2436384 /var/tmp/bperf.sock 00:24:18.410 23:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2436384 ']' 00:24:18.410 23:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:18.410 23:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:18.410 23:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:18.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:24:18.410 23:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:18.410 23:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:18.410 23:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:18.410 [2024-07-15 23:27:33.702312] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:24:18.410 [2024-07-15 23:27:33.702395] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2436384 ] 00:24:18.667 EAL: No free 2048 kB hugepages reported on node 1 00:24:18.667 [2024-07-15 23:27:33.761856] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.667 [2024-07-15 23:27:33.871635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:18.667 23:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:18.667 23:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:18.667 23:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:18.667 23:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:18.667 23:27:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:19.231 23:27:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:19.231 23:27:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:19.535 nvme0n1 00:24:19.535 23:27:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:19.535 23:27:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:19.826 Running I/O for 2 seconds... 
00:24:21.722 00:24:21.722 Latency(us) 00:24:21.722 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:21.722 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:21.722 nvme0n1 : 2.00 18782.82 73.37 0.00 0.00 6806.60 3106.89 15825.73 00:24:21.722 =================================================================================================================== 00:24:21.722 Total : 18782.82 73.37 0.00 0.00 6806.60 3106.89 15825.73 00:24:21.722 0 00:24:21.722 23:27:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:21.722 23:27:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:21.722 23:27:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:21.722 23:27:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:21.722 | select(.opcode=="crc32c") 00:24:21.722 | "\(.module_name) \(.executed)"' 00:24:21.722 23:27:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:21.980 23:27:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:21.980 23:27:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:21.980 23:27:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:21.980 23:27:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:21.980 23:27:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2436384 00:24:21.980 23:27:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2436384 ']' 00:24:21.980 23:27:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2436384 00:24:21.980 23:27:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:21.980 23:27:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:21.980 23:27:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2436384 00:24:21.980 23:27:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:21.980 23:27:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:21.980 23:27:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2436384' 00:24:21.980 killing process with pid 2436384 00:24:21.980 23:27:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2436384 00:24:21.980 Received shutdown signal, test time was about 2.000000 seconds 00:24:21.980 00:24:21.980 Latency(us) 00:24:21.980 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:21.980 =================================================================================================================== 00:24:21.980 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:21.980 23:27:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2436384 00:24:22.237 23:27:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:24:22.237 23:27:37 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:22.237 23:27:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:22.237 23:27:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:22.237 23:27:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:22.237 23:27:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:22.237 23:27:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:22.237 23:27:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2436801 00:24:22.237 23:27:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2436801 /var/tmp/bperf.sock 00:24:22.237 23:27:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:22.237 23:27:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2436801 ']' 00:24:22.237 23:27:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:22.237 23:27:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:22.237 23:27:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:22.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:22.237 23:27:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:22.237 23:27:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:22.237 [2024-07-15 23:27:37.513094] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:24:22.237 [2024-07-15 23:27:37.513184] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2436801 ] 00:24:22.237 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:22.237 Zero copy mechanism will not be used. 
00:24:22.237 EAL: No free 2048 kB hugepages reported on node 1 00:24:22.495 [2024-07-15 23:27:37.590208] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.495 [2024-07-15 23:27:37.747278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:22.752 23:27:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:22.752 23:27:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:22.752 23:27:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:22.752 23:27:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:22.752 23:27:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:23.008 23:27:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:23.008 23:27:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:23.572 nvme0n1 00:24:23.572 23:27:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:23.572 23:27:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:23.572 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:23.572 Zero copy mechanism will not be used. 00:24:23.572 Running I/O for 2 seconds... 
00:24:26.098 00:24:26.098 Latency(us) 00:24:26.098 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:26.098 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:26.098 nvme0n1 : 2.01 2907.35 363.42 0.00 0.00 5498.85 855.61 13204.29 00:24:26.098 =================================================================================================================== 00:24:26.098 Total : 2907.35 363.42 0.00 0.00 5498.85 855.61 13204.29 00:24:26.098 0 00:24:26.098 23:27:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:26.098 23:27:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:26.098 23:27:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:26.098 23:27:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:26.098 23:27:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:26.098 | select(.opcode=="crc32c") 00:24:26.098 | "\(.module_name) \(.executed)"' 00:24:26.098 23:27:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:26.098 23:27:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:26.098 23:27:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:26.098 23:27:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:26.098 23:27:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2436801 00:24:26.098 23:27:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2436801 ']' 00:24:26.098 23:27:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2436801 00:24:26.098 23:27:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:26.098 23:27:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:26.098 23:27:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2436801 00:24:26.098 23:27:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:26.098 23:27:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:26.098 23:27:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2436801' 00:24:26.098 killing process with pid 2436801 00:24:26.098 23:27:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2436801 00:24:26.098 Received shutdown signal, test time was about 2.000000 seconds 00:24:26.098 00:24:26.098 Latency(us) 00:24:26.098 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:26.098 =================================================================================================================== 00:24:26.099 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:26.099 23:27:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2436801 00:24:26.099 23:27:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:24:26.099 23:27:41 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:26.099 23:27:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:26.099 23:27:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:26.099 23:27:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:26.099 23:27:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:26.099 23:27:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:26.099 23:27:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2437230 00:24:26.099 23:27:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2437230 /var/tmp/bperf.sock 00:24:26.099 23:27:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:26.099 23:27:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2437230 ']' 00:24:26.099 23:27:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:26.099 23:27:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:26.099 23:27:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:26.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:26.099 23:27:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:26.099 23:27:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:26.099 [2024-07-15 23:27:41.399553] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
00:24:26.099 [2024-07-15 23:27:41.399655] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2437230 ] 00:24:26.357 EAL: No free 2048 kB hugepages reported on node 1 00:24:26.357 [2024-07-15 23:27:41.466312] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.357 [2024-07-15 23:27:41.583411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:26.357 23:27:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:26.357 23:27:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:26.357 23:27:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:26.357 23:27:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:26.357 23:27:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:26.923 23:27:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:26.923 23:27:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:27.181 nvme0n1 00:24:27.181 23:27:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:27.181 23:27:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:27.181 Running I/O for 2 seconds... 
00:24:29.710 00:24:29.710 Latency(us) 00:24:29.710 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:29.710 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:29.710 nvme0n1 : 2.00 21600.62 84.38 0.00 0.00 5916.79 2791.35 13689.74 00:24:29.710 =================================================================================================================== 00:24:29.710 Total : 21600.62 84.38 0.00 0.00 5916.79 2791.35 13689.74 00:24:29.710 0 00:24:29.710 23:27:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:29.710 23:27:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:29.710 23:27:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:29.710 23:27:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:29.710 | select(.opcode=="crc32c") 00:24:29.710 | "\(.module_name) \(.executed)"' 00:24:29.710 23:27:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:29.710 23:27:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:29.710 23:27:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:29.710 23:27:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:29.710 23:27:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:29.710 23:27:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2437230 00:24:29.710 23:27:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2437230 ']' 00:24:29.710 23:27:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2437230 00:24:29.710 23:27:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:29.710 23:27:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:29.710 23:27:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2437230 00:24:29.710 23:27:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:29.710 23:27:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:29.710 23:27:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2437230' 00:24:29.710 killing process with pid 2437230 00:24:29.710 23:27:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2437230 00:24:29.710 Received shutdown signal, test time was about 2.000000 seconds 00:24:29.710 00:24:29.710 Latency(us) 00:24:29.710 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:29.710 =================================================================================================================== 00:24:29.710 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:29.710 23:27:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2437230 00:24:29.969 23:27:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:24:29.969 23:27:45 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:29.969 23:27:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:29.969 23:27:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:29.969 23:27:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:29.969 23:27:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:29.969 23:27:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:29.969 23:27:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2437733 00:24:29.969 23:27:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2437733 /var/tmp/bperf.sock 00:24:29.969 23:27:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2437733 ']' 00:24:29.969 23:27:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:29.969 23:27:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:29.969 23:27:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:29.969 23:27:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:29.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:29.969 23:27:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:29.969 23:27:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:29.969 [2024-07-15 23:27:45.099832] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:24:29.969 [2024-07-15 23:27:45.099923] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2437733 ] 00:24:29.969 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:29.969 Zero copy mechanism will not be used. 
00:24:29.969 EAL: No free 2048 kB hugepages reported on node 1 00:24:29.969 [2024-07-15 23:27:45.158242] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.969 [2024-07-15 23:27:45.267343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:30.227 23:27:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:30.227 23:27:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:30.227 23:27:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:30.227 23:27:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:30.227 23:27:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:30.485 23:27:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:30.485 23:27:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:30.743 nvme0n1 00:24:30.743 23:27:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:30.743 23:27:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:31.001 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:31.001 Zero copy mechanism will not be used. 00:24:31.001 Running I/O for 2 seconds... 
00:24:32.896 00:24:32.896 Latency(us) 00:24:32.896 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:32.896 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:32.896 nvme0n1 : 2.00 4318.88 539.86 0.00 0.00 3696.37 2694.26 7475.96 00:24:32.896 =================================================================================================================== 00:24:32.896 Total : 4318.88 539.86 0.00 0.00 3696.37 2694.26 7475.96 00:24:32.896 0 00:24:32.896 23:27:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:32.896 23:27:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:32.896 23:27:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:32.896 23:27:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:32.896 23:27:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:32.896 | select(.opcode=="crc32c") 00:24:32.896 | "\(.module_name) \(.executed)"' 00:24:33.153 23:27:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:33.153 23:27:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:33.153 23:27:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:33.153 23:27:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:33.153 23:27:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2437733 00:24:33.153 23:27:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2437733 ']' 00:24:33.153 23:27:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2437733 00:24:33.153 23:27:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:33.153 23:27:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:33.153 23:27:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2437733 00:24:33.410 23:27:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:33.410 23:27:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:33.410 23:27:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2437733' 00:24:33.410 killing process with pid 2437733 00:24:33.410 23:27:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2437733 00:24:33.410 Received shutdown signal, test time was about 2.000000 seconds 00:24:33.410 00:24:33.410 Latency(us) 00:24:33.410 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:33.410 =================================================================================================================== 00:24:33.410 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:33.410 23:27:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2437733 00:24:33.667 23:27:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2436231 00:24:33.667 23:27:48 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2436231 ']' 00:24:33.667 23:27:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2436231 00:24:33.667 23:27:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:33.667 23:27:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:33.667 23:27:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2436231 00:24:33.667 23:27:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:33.667 23:27:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:33.667 23:27:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2436231' 00:24:33.667 killing process with pid 2436231 00:24:33.667 23:27:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2436231 00:24:33.667 23:27:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2436231 00:24:33.924 00:24:33.924 real 0m16.593s 00:24:33.924 user 0m32.072s 00:24:33.924 sys 0m4.712s 00:24:33.924 23:27:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:33.924 23:27:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:33.924 ************************************ 00:24:33.924 END TEST nvmf_digest_clean 00:24:33.924 ************************************ 00:24:33.924 23:27:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:24:33.924 23:27:49 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:24:33.924 23:27:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:33.924 23:27:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:33.924 23:27:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:33.924 ************************************ 00:24:33.924 START TEST nvmf_digest_error 00:24:33.924 ************************************ 00:24:33.924 23:27:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:24:33.924 23:27:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:24:33.924 23:27:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:33.924 23:27:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:33.924 23:27:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:33.924 23:27:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=2438173 00:24:33.924 23:27:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:33.924 23:27:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 2438173 00:24:33.924 23:27:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2438173 ']' 00:24:33.924 23:27:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:24:33.924 23:27:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:33.924 23:27:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:33.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:33.924 23:27:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:33.924 23:27:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:33.924 [2024-07-15 23:27:49.142420] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:24:33.924 [2024-07-15 23:27:49.142497] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:33.924 EAL: No free 2048 kB hugepages reported on node 1 00:24:33.924 [2024-07-15 23:27:49.219001] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.181 [2024-07-15 23:27:49.355949] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:34.181 [2024-07-15 23:27:49.356007] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:34.181 [2024-07-15 23:27:49.356054] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:34.181 [2024-07-15 23:27:49.356076] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:34.181 [2024-07-15 23:27:49.356103] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:34.181 [2024-07-15 23:27:49.356140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.181 23:27:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:34.181 23:27:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:34.181 23:27:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:34.181 23:27:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:34.181 23:27:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:34.181 23:27:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:34.181 23:27:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:24:34.182 23:27:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.182 23:27:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:34.182 [2024-07-15 23:27:49.460926] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:24:34.182 23:27:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.182 23:27:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:24:34.182 23:27:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:24:34.182 23:27:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.182 23:27:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:34.439 null0 00:24:34.439 [2024-07-15 23:27:49.566591] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:34.439 [2024-07-15 23:27:49.590822] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:34.439 23:27:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.439 23:27:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:24:34.439 23:27:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:34.439 23:27:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:34.439 23:27:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:34.439 23:27:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:34.439 23:27:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2438315 00:24:34.439 23:27:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:24:34.439 23:27:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2438315 /var/tmp/bperf.sock 00:24:34.439 23:27:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2438315 ']' 00:24:34.439 23:27:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:34.439 23:27:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:24:34.439 23:27:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:34.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:34.439 23:27:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:34.439 23:27:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:34.439 [2024-07-15 23:27:49.634904] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:24:34.439 [2024-07-15 23:27:49.634966] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2438315 ] 00:24:34.439 EAL: No free 2048 kB hugepages reported on node 1 00:24:34.439 [2024-07-15 23:27:49.691504] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.697 [2024-07-15 23:27:49.798390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:34.697 23:27:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:34.697 23:27:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:34.697 23:27:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:34.697 23:27:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:34.955 23:27:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:34.955 23:27:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.955 23:27:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:34.955 23:27:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.955 23:27:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:34.955 23:27:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:35.520 nvme0n1 00:24:35.520 23:27:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:35.520 23:27:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.520 23:27:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:35.520 23:27:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.520 23:27:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:35.520 23:27:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:35.520 Running I/O for 2 seconds... 00:24:35.520 [2024-07-15 23:27:50.798235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:35.520 [2024-07-15 23:27:50.798289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.520 [2024-07-15 23:27:50.798312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.520 [2024-07-15 23:27:50.813899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:35.520 [2024-07-15 23:27:50.813929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.520 [2024-07-15 23:27:50.813946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.520 [2024-07-15 23:27:50.825674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:35.520 [2024-07-15 23:27:50.825710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.520 [2024-07-15 23:27:50.825730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.778 [2024-07-15 23:27:50.839923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:35.778 [2024-07-15 23:27:50.839952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.778 [2024-07-15 23:27:50.839968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.778 [2024-07-15 23:27:50.855643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:35.778 [2024-07-15 23:27:50.855679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.778 [2024-07-15 23:27:50.855699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.778 [2024-07-15 23:27:50.868431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:35.778 [2024-07-15 23:27:50.868467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:14966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.778 [2024-07-15 23:27:50.868487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.778 [2024-07-15 23:27:50.880529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:35.778 [2024-07-15 23:27:50.880565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:3020 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:35.778 [2024-07-15 23:27:50.880584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.778 [2024-07-15 23:27:50.893660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:35.778 [2024-07-15 23:27:50.893695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.778 [2024-07-15 23:27:50.893725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.778 [2024-07-15 23:27:50.910948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:35.778 [2024-07-15 23:27:50.910977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:13578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.778 [2024-07-15 23:27:50.910999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.778 [2024-07-15 23:27:50.923163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:35.778 [2024-07-15 23:27:50.923199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:8372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.778 [2024-07-15 23:27:50.923220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.778 [2024-07-15 23:27:50.938217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:35.778 [2024-07-15 23:27:50.938254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.778 [2024-07-15 23:27:50.938275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.778 [2024-07-15 23:27:50.952124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:35.778 [2024-07-15 23:27:50.952160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.778 [2024-07-15 23:27:50.952179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.778 [2024-07-15 23:27:50.965264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:35.778 [2024-07-15 23:27:50.965300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.778 [2024-07-15 23:27:50.965320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.778 [2024-07-15 23:27:50.981620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:35.778 [2024-07-15 23:27:50.981655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:5 nsid:1 lba:24773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.778 [2024-07-15 23:27:50.981676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.778 [2024-07-15 23:27:50.994912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:35.778 [2024-07-15 23:27:50.994941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:14317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.778 [2024-07-15 23:27:50.994958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.778 [2024-07-15 23:27:51.006812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:35.778 [2024-07-15 23:27:51.006841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.778 [2024-07-15 23:27:51.006858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.779 [2024-07-15 23:27:51.023780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:35.779 [2024-07-15 23:27:51.023826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:12109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.779 [2024-07-15 23:27:51.023853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.779 [2024-07-15 23:27:51.039085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:35.779 [2024-07-15 23:27:51.039128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.779 [2024-07-15 23:27:51.039149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.779 [2024-07-15 23:27:51.051229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:35.779 [2024-07-15 23:27:51.051263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.779 [2024-07-15 23:27:51.051283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.779 [2024-07-15 23:27:51.069424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:35.779 [2024-07-15 23:27:51.069458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.779 [2024-07-15 23:27:51.069477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.779 [2024-07-15 23:27:51.081522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:35.779 [2024-07-15 23:27:51.081556] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.779 [2024-07-15 23:27:51.081576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.037 [2024-07-15 23:27:51.096118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.037 [2024-07-15 23:27:51.096153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.037 [2024-07-15 23:27:51.096173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.037 [2024-07-15 23:27:51.108448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.037 [2024-07-15 23:27:51.108484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.037 [2024-07-15 23:27:51.108504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.037 [2024-07-15 23:27:51.121880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.037 [2024-07-15 23:27:51.121910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.037 [2024-07-15 23:27:51.121927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.037 [2024-07-15 23:27:51.139972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.037 [2024-07-15 23:27:51.140001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:16062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.037 [2024-07-15 23:27:51.140033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.037 [2024-07-15 23:27:51.151553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.037 [2024-07-15 23:27:51.151588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.037 [2024-07-15 23:27:51.151616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.037 [2024-07-15 23:27:51.167619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.037 [2024-07-15 23:27:51.167654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.037 [2024-07-15 23:27:51.167674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.037 [2024-07-15 23:27:51.183691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.037 
[2024-07-15 23:27:51.183726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.037 [2024-07-15 23:27:51.183756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.037 [2024-07-15 23:27:51.196385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.037 [2024-07-15 23:27:51.196421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.037 [2024-07-15 23:27:51.196441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.037 [2024-07-15 23:27:51.213703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.037 [2024-07-15 23:27:51.213758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.037 [2024-07-15 23:27:51.213780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.037 [2024-07-15 23:27:51.225194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.037 [2024-07-15 23:27:51.225228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:18689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.037 [2024-07-15 23:27:51.225257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.037 [2024-07-15 23:27:51.240394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.037 [2024-07-15 23:27:51.240429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.037 [2024-07-15 23:27:51.240448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.037 [2024-07-15 23:27:51.255078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.037 [2024-07-15 23:27:51.255113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.037 [2024-07-15 23:27:51.255133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.037 [2024-07-15 23:27:51.267293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.037 [2024-07-15 23:27:51.267327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.037 [2024-07-15 23:27:51.267347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.037 [2024-07-15 23:27:51.282639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xcb3380) 00:24:36.037 [2024-07-15 23:27:51.282675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:10014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.037 [2024-07-15 23:27:51.282701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.037 [2024-07-15 23:27:51.294348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.037 [2024-07-15 23:27:51.294383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.037 [2024-07-15 23:27:51.294403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.037 [2024-07-15 23:27:51.308186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.037 [2024-07-15 23:27:51.308221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:9699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.037 [2024-07-15 23:27:51.308241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.037 [2024-07-15 23:27:51.320067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.037 [2024-07-15 23:27:51.320101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.037 [2024-07-15 23:27:51.320120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.037 [2024-07-15 23:27:51.332706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.037 [2024-07-15 23:27:51.332748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.037 [2024-07-15 23:27:51.332783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.037 [2024-07-15 23:27:51.348998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.037 [2024-07-15 23:27:51.349027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.037 [2024-07-15 23:27:51.349043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.296 [2024-07-15 23:27:51.364881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.296 [2024-07-15 23:27:51.364910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:18976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.296 [2024-07-15 23:27:51.364925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.296 [2024-07-15 23:27:51.376080] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.296 [2024-07-15 23:27:51.376110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.296 [2024-07-15 23:27:51.376155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.296 [2024-07-15 23:27:51.389993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.296 [2024-07-15 23:27:51.390021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.296 [2024-07-15 23:27:51.390053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.296 [2024-07-15 23:27:51.405792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.296 [2024-07-15 23:27:51.405822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:10592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.296 [2024-07-15 23:27:51.405838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.296 [2024-07-15 23:27:51.417179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.296 [2024-07-15 23:27:51.417219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.296 [2024-07-15 23:27:51.417239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.296 [2024-07-15 23:27:51.433781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.296 [2024-07-15 23:27:51.433817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.296 [2024-07-15 23:27:51.433833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.296 [2024-07-15 23:27:51.451580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.296 [2024-07-15 23:27:51.451616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.296 [2024-07-15 23:27:51.451635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.296 [2024-07-15 23:27:51.468781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.296 [2024-07-15 23:27:51.468810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.296 [2024-07-15 23:27:51.468826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:24:36.296 [2024-07-15 23:27:51.482510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.296 [2024-07-15 23:27:51.482547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:18952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.296 [2024-07-15 23:27:51.482566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.296 [2024-07-15 23:27:51.495319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.296 [2024-07-15 23:27:51.495354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:20860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.296 [2024-07-15 23:27:51.495373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.296 [2024-07-15 23:27:51.508120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.296 [2024-07-15 23:27:51.508156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.296 [2024-07-15 23:27:51.508175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.296 [2024-07-15 23:27:51.524465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.296 [2024-07-15 23:27:51.524500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.296 [2024-07-15 23:27:51.524525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.296 [2024-07-15 23:27:51.536902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.296 [2024-07-15 23:27:51.536932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:19153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.296 [2024-07-15 23:27:51.536948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.296 [2024-07-15 23:27:51.549449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.296 [2024-07-15 23:27:51.549483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:22818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.296 [2024-07-15 23:27:51.549503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.296 [2024-07-15 23:27:51.562129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.296 [2024-07-15 23:27:51.562164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.296 [2024-07-15 23:27:51.562183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.296 [2024-07-15 23:27:51.576860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.296 [2024-07-15 23:27:51.576891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.296 [2024-07-15 23:27:51.576908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.296 [2024-07-15 23:27:51.589471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.296 [2024-07-15 23:27:51.589511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.296 [2024-07-15 23:27:51.589530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.296 [2024-07-15 23:27:51.602386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.296 [2024-07-15 23:27:51.602422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.296 [2024-07-15 23:27:51.602441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.555 [2024-07-15 23:27:51.616992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.555 [2024-07-15 23:27:51.617022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.555 [2024-07-15 23:27:51.617038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.555 [2024-07-15 23:27:51.632553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.555 [2024-07-15 23:27:51.632589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.555 [2024-07-15 23:27:51.632608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.555 [2024-07-15 23:27:51.645417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.555 [2024-07-15 23:27:51.645458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.555 [2024-07-15 23:27:51.645479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.555 [2024-07-15 23:27:51.661437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.555 [2024-07-15 23:27:51.661472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.555 [2024-07-15 23:27:51.661491] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.555 [2024-07-15 23:27:51.677501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.555 [2024-07-15 23:27:51.677536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:23566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.555 [2024-07-15 23:27:51.677556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.555 [2024-07-15 23:27:51.690499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.555 [2024-07-15 23:27:51.690533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:7357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.555 [2024-07-15 23:27:51.690553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.555 [2024-07-15 23:27:51.704635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.555 [2024-07-15 23:27:51.704671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.555 [2024-07-15 23:27:51.704690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.555 [2024-07-15 23:27:51.720504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.555 [2024-07-15 23:27:51.720538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:6696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.555 [2024-07-15 23:27:51.720558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.555 [2024-07-15 23:27:51.732558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.555 [2024-07-15 23:27:51.732592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.555 [2024-07-15 23:27:51.732612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.555 [2024-07-15 23:27:51.746472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.555 [2024-07-15 23:27:51.746507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:24635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.555 [2024-07-15 23:27:51.746527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.555 [2024-07-15 23:27:51.760853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.555 [2024-07-15 23:27:51.760882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:7348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.555 [2024-07-15 23:27:51.760899] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.555 [2024-07-15 23:27:51.772840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.555 [2024-07-15 23:27:51.772869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.555 [2024-07-15 23:27:51.772885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.555 [2024-07-15 23:27:51.787602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.555 [2024-07-15 23:27:51.787637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.555 [2024-07-15 23:27:51.787657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.555 [2024-07-15 23:27:51.802445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.555 [2024-07-15 23:27:51.802481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:25055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.555 [2024-07-15 23:27:51.802500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.555 [2024-07-15 23:27:51.815459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.555 [2024-07-15 23:27:51.815494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.555 [2024-07-15 23:27:51.815513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.555 [2024-07-15 23:27:51.828621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.555 [2024-07-15 23:27:51.828655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:18507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.555 [2024-07-15 23:27:51.828675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.555 [2024-07-15 23:27:51.841531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.555 [2024-07-15 23:27:51.841566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.555 [2024-07-15 23:27:51.841586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.555 [2024-07-15 23:27:51.855582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.555 [2024-07-15 23:27:51.855618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:36.555 [2024-07-15 23:27:51.855638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.555 [2024-07-15 23:27:51.867542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.555 [2024-07-15 23:27:51.867577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.555 [2024-07-15 23:27:51.867597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.814 [2024-07-15 23:27:51.882850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.814 [2024-07-15 23:27:51.882879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:24923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.814 [2024-07-15 23:27:51.882901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.814 [2024-07-15 23:27:51.896467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.814 [2024-07-15 23:27:51.896502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:9669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.814 [2024-07-15 23:27:51.896522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.814 [2024-07-15 23:27:51.907362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.814 [2024-07-15 23:27:51.907397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.814 [2024-07-15 23:27:51.907417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.814 [2024-07-15 23:27:51.921623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.814 [2024-07-15 23:27:51.921659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:6836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.814 [2024-07-15 23:27:51.921679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.814 [2024-07-15 23:27:51.937118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.814 [2024-07-15 23:27:51.937153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:24008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.814 [2024-07-15 23:27:51.937172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.814 [2024-07-15 23:27:51.950009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.814 [2024-07-15 23:27:51.950038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:14923 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.814 [2024-07-15 23:27:51.950070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.814 [2024-07-15 23:27:51.963289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.814 [2024-07-15 23:27:51.963324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.814 [2024-07-15 23:27:51.963344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.814 [2024-07-15 23:27:51.978752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.814 [2024-07-15 23:27:51.978797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.814 [2024-07-15 23:27:51.978814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.814 [2024-07-15 23:27:51.990443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.814 [2024-07-15 23:27:51.990478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:9877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.814 [2024-07-15 23:27:51.990497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.814 [2024-07-15 23:27:52.004084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.814 [2024-07-15 23:27:52.004124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.814 [2024-07-15 23:27:52.004145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.814 [2024-07-15 23:27:52.017962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.814 [2024-07-15 23:27:52.017991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.814 [2024-07-15 23:27:52.018007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.814 [2024-07-15 23:27:52.032032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.814 [2024-07-15 23:27:52.032076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.814 [2024-07-15 23:27:52.032092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.814 [2024-07-15 23:27:52.044978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.814 [2024-07-15 23:27:52.045007] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:56 nsid:1 lba:16888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.814 [2024-07-15 23:27:52.045023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.814 [2024-07-15 23:27:52.058931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.814 [2024-07-15 23:27:52.058961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.814 [2024-07-15 23:27:52.058977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.814 [2024-07-15 23:27:52.072634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.814 [2024-07-15 23:27:52.072669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.814 [2024-07-15 23:27:52.072688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.814 [2024-07-15 23:27:52.083832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.814 [2024-07-15 23:27:52.083861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.814 [2024-07-15 23:27:52.083877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.814 [2024-07-15 23:27:52.098719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.814 [2024-07-15 23:27:52.098763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.814 [2024-07-15 23:27:52.098797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.814 [2024-07-15 23:27:52.110844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.814 [2024-07-15 23:27:52.110872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.814 [2024-07-15 23:27:52.110888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.814 [2024-07-15 23:27:52.124935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:36.814 [2024-07-15 23:27:52.124967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:25229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.814 [2024-07-15 23:27:52.124984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.072 [2024-07-15 23:27:52.140517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:37.072 [2024-07-15 23:27:52.140552] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:25590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.072 [2024-07-15 23:27:52.140572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.072 [2024-07-15 23:27:52.152597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:37.072 [2024-07-15 23:27:52.152632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.072 [2024-07-15 23:27:52.152652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.072 [2024-07-15 23:27:52.166818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:37.072 [2024-07-15 23:27:52.166847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:13494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.072 [2024-07-15 23:27:52.166863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.072 [2024-07-15 23:27:52.179924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:37.072 [2024-07-15 23:27:52.179953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.072 [2024-07-15 23:27:52.179970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.072 [2024-07-15 23:27:52.194646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:37.072 [2024-07-15 23:27:52.194681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:19158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.072 [2024-07-15 23:27:52.194701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.072 [2024-07-15 23:27:52.206836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:37.072 [2024-07-15 23:27:52.206865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.072 [2024-07-15 23:27:52.206881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.072 [2024-07-15 23:27:52.221485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:37.072 [2024-07-15 23:27:52.221520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.072 [2024-07-15 23:27:52.221540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.072 [2024-07-15 23:27:52.235143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xcb3380) 00:24:37.072 [2024-07-15 23:27:52.235179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:3901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.072 [2024-07-15 23:27:52.235205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.072 [2024-07-15 23:27:52.248355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:37.072 [2024-07-15 23:27:52.248390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.072 [2024-07-15 23:27:52.248410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.072 [2024-07-15 23:27:52.260757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:37.072 [2024-07-15 23:27:52.260803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.072 [2024-07-15 23:27:52.260819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.072 [2024-07-15 23:27:52.274727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:37.072 [2024-07-15 23:27:52.274786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.072 [2024-07-15 23:27:52.274803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.072 [2024-07-15 23:27:52.286662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:37.072 [2024-07-15 23:27:52.286697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:16134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.072 [2024-07-15 23:27:52.286717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.072 [2024-07-15 23:27:52.300514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:37.072 [2024-07-15 23:27:52.300549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:22054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.072 [2024-07-15 23:27:52.300568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.072 [2024-07-15 23:27:52.314361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:37.072 [2024-07-15 23:27:52.314396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.072 [2024-07-15 23:27:52.314415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.072 [2024-07-15 23:27:52.326825] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:37.072 [2024-07-15 23:27:52.326854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.072 [2024-07-15 23:27:52.326871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.072 [2024-07-15 23:27:52.343095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:37.072 [2024-07-15 23:27:52.343131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:88 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.072 [2024-07-15 23:27:52.343151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.072 [2024-07-15 23:27:52.359667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:37.072 [2024-07-15 23:27:52.359703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.072 [2024-07-15 23:27:52.359722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.072 [2024-07-15 23:27:52.376908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:37.072 [2024-07-15 23:27:52.376936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.072 [2024-07-15 23:27:52.376952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.330 [2024-07-15 23:27:52.392850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:37.330 [2024-07-15 23:27:52.392880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.330 [2024-07-15 23:27:52.392897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.330 [2024-07-15 23:27:52.405414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:37.330 [2024-07-15 23:27:52.405449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:13920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.330 [2024-07-15 23:27:52.405470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.330 [2024-07-15 23:27:52.420958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:37.330 [2024-07-15 23:27:52.420987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.330 [2024-07-15 23:27:52.421004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:24:37.330 [2024-07-15 23:27:52.433067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:37.330 [2024-07-15 23:27:52.433119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.331 [2024-07-15 23:27:52.433139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.331 [2024-07-15 23:27:52.446547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:37.331 [2024-07-15 23:27:52.446583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.331 [2024-07-15 23:27:52.446604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.331 [2024-07-15 23:27:52.459927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:37.331 [2024-07-15 23:27:52.459958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:25207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.331 [2024-07-15 23:27:52.459976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.331 [2024-07-15 23:27:52.473857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:37.331 [2024-07-15 23:27:52.473887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:10181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.331 [2024-07-15 23:27:52.473910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.331 [2024-07-15 23:27:52.487063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:37.331 [2024-07-15 23:27:52.487110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:8697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.331 [2024-07-15 23:27:52.487129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.331 [2024-07-15 23:27:52.500831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:37.331 [2024-07-15 23:27:52.500862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.331 [2024-07-15 23:27:52.500878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.331 [2024-07-15 23:27:52.514419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:37.331 [2024-07-15 23:27:52.514453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.331 [2024-07-15 23:27:52.514473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.331 [2024-07-15 23:27:52.532183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:37.331 [2024-07-15 23:27:52.532217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.331 [2024-07-15 23:27:52.532237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.331 [2024-07-15 23:27:52.547413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:37.331 [2024-07-15 23:27:52.547448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.331 [2024-07-15 23:27:52.547468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.331 [2024-07-15 23:27:52.559435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:37.331 [2024-07-15 23:27:52.559470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.331 [2024-07-15 23:27:52.559490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.331 [2024-07-15 23:27:52.576160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:37.331 [2024-07-15 23:27:52.576195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.331 [2024-07-15 23:27:52.576214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.331 [2024-07-15 23:27:52.588452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:37.331 [2024-07-15 23:27:52.588487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.331 [2024-07-15 23:27:52.588505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.331 [2024-07-15 23:27:52.602039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:37.331 [2024-07-15 23:27:52.602072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:18198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.331 [2024-07-15 23:27:52.602105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.331 [2024-07-15 23:27:52.615342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:37.331 [2024-07-15 23:27:52.615376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.331 [2024-07-15 23:27:52.615399] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.331 [2024-07-15 23:27:52.629273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:37.331 [2024-07-15 23:27:52.629308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.331 [2024-07-15 23:27:52.629327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.331 [2024-07-15 23:27:52.643513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:37.331 [2024-07-15 23:27:52.643549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.331 [2024-07-15 23:27:52.643569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.589 [2024-07-15 23:27:52.655022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:37.589 [2024-07-15 23:27:52.655051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.589 [2024-07-15 23:27:52.655067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.589 [2024-07-15 23:27:52.669220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:37.589 [2024-07-15 23:27:52.669255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.589 [2024-07-15 23:27:52.669274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.589 [2024-07-15 23:27:52.686395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:37.589 [2024-07-15 23:27:52.686440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.589 [2024-07-15 23:27:52.686459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.589 [2024-07-15 23:27:52.703779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:37.589 [2024-07-15 23:27:52.703826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:10899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.589 [2024-07-15 23:27:52.703842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.589 [2024-07-15 23:27:52.721521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:37.589 [2024-07-15 23:27:52.721565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.589 [2024-07-15 23:27:52.721584] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.589 [2024-07-15 23:27:52.732855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:37.589 [2024-07-15 23:27:52.732886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.589 [2024-07-15 23:27:52.732903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.589 [2024-07-15 23:27:52.749058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:37.589 [2024-07-15 23:27:52.749108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.589 [2024-07-15 23:27:52.749128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.589 [2024-07-15 23:27:52.763170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:37.589 [2024-07-15 23:27:52.763215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.589 [2024-07-15 23:27:52.763235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.589 [2024-07-15 23:27:52.776904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb3380) 00:24:37.589 [2024-07-15 23:27:52.776948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:2972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.589 [2024-07-15 23:27:52.776965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.589 00:24:37.589 Latency(us) 00:24:37.589 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:37.589 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:37.589 nvme0n1 : 2.01 18152.42 70.91 0.00 0.00 7042.44 3737.98 26020.22 00:24:37.589 =================================================================================================================== 00:24:37.589 Total : 18152.42 70.91 0.00 0.00 7042.44 3737.98 26020.22 00:24:37.589 0 00:24:37.589 23:27:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:37.589 23:27:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:37.589 23:27:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:37.589 23:27:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:37.590 | .driver_specific 00:24:37.590 | .nvme_error 00:24:37.590 | .status_code 00:24:37.590 | .command_transient_transport_error' 00:24:37.847 23:27:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 142 > 0 )) 00:24:37.847 23:27:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@73 -- # killprocess 2438315 00:24:37.847 23:27:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2438315 ']' 00:24:37.847 23:27:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2438315 00:24:37.847 23:27:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:37.847 23:27:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:37.847 23:27:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2438315 00:24:37.847 23:27:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:37.847 23:27:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:37.847 23:27:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2438315' 00:24:37.847 killing process with pid 2438315 00:24:37.847 23:27:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2438315 00:24:37.847 Received shutdown signal, test time was about 2.000000 seconds 00:24:37.847 00:24:37.847 Latency(us) 00:24:37.847 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:37.847 =================================================================================================================== 00:24:37.847 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:37.847 23:27:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2438315 00:24:38.103 23:27:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:24:38.103 23:27:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:38.103 23:27:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:38.103 23:27:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:24:38.103 23:27:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:24:38.103 23:27:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2438719 00:24:38.103 23:27:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:24:38.103 23:27:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2438719 /var/tmp/bperf.sock 00:24:38.103 23:27:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2438719 ']' 00:24:38.103 23:27:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:38.103 23:27:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:38.103 23:27:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:38.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
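The get_transient_errcount helper traced above reduces to a single RPC plus a jq filter against bdevperf's socket. A minimal stand-alone sketch of the same query, assuming SPDK's stock scripts/rpc.py and jq are available and bdevperf is listening on /var/tmp/bperf.sock:

  # Count of COMMAND TRANSIENT TRANSPORT ERROR completions recorded for nvme0n1;
  # meaningful only if bdevperf was configured with bdev_nvme_set_options --nvme-error-stat.
  errcount=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errcount > 0 )) && echo "observed $errcount transient transport errors"

With crc32c corruption active, this check is expected to pass, as it did in the run above with a count of 142.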
00:24:38.103 23:27:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:38.103 23:27:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:38.103 [2024-07-15 23:27:53.365764] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:24:38.103 [2024-07-15 23:27:53.365842] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2438719 ] 00:24:38.104 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:38.104 Zero copy mechanism will not be used. 00:24:38.104 EAL: No free 2048 kB hugepages reported on node 1 00:24:38.360 [2024-07-15 23:27:53.428172] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.360 [2024-07-15 23:27:53.542877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:38.360 23:27:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:38.360 23:27:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:38.360 23:27:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:38.360 23:27:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:38.617 23:27:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:38.617 23:27:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.617 23:27:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:38.617 23:27:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.617 23:27:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:38.617 23:27:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:39.181 nvme0n1 00:24:39.181 23:27:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:24:39.181 23:27:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.181 23:27:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:39.181 23:27:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.181 23:27:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:39.181 23:27:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:39.181 I/O size of 131072 is greater than zero copy threshold (65536). 
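Before the 128 KiB random-read job above starts producing digest failures, the harness wires the error path together entirely over RPC: NVMe error counters are enabled on the bdevperf side, the controller is attached with TCP data digest (--ddgst), and the accel framework is told to corrupt every 32nd crc32c operation so the data-digest check fails during the run. A condensed sketch of that sequence, mirroring the commands traced above (workspace paths shortened; bperf_rpc targets /var/tmp/bperf.sock, while the accel injection is issued via rpc_cmd and is shown here, as an assumption, against rpc.py's default application socket):

  BPERF='scripts/rpc.py -s /var/tmp/bperf.sock'
  # Keep per-command NVMe error statistics and retry failed I/O indefinitely.
  $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Leave crc32c untouched while the controller attaches...
  scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  # ...then attach over TCP with data digest enabled, creating bdev nvme0n1.
  $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Corrupt every 32nd crc32c so received PDUs fail their data-digest check.
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
  # Kick off the queued workload (bdevperf was launched with -z, so it idles until perform_tests).
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests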
00:24:39.181 Zero copy mechanism will not be used. 00:24:39.181 Running I/O for 2 seconds... 00:24:39.181 [2024-07-15 23:27:54.365833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.181 [2024-07-15 23:27:54.365912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.181 [2024-07-15 23:27:54.365932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:39.181 [2024-07-15 23:27:54.373445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.181 [2024-07-15 23:27:54.373475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.181 [2024-07-15 23:27:54.373491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:39.181 [2024-07-15 23:27:54.381743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.181 [2024-07-15 23:27:54.381771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.181 [2024-07-15 23:27:54.381803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:39.181 [2024-07-15 23:27:54.390368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.181 [2024-07-15 23:27:54.390395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.181 [2024-07-15 23:27:54.390425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.181 [2024-07-15 23:27:54.399779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.181 [2024-07-15 23:27:54.399822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.181 [2024-07-15 23:27:54.399840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:39.181 [2024-07-15 23:27:54.411864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.181 [2024-07-15 23:27:54.411894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.181 [2024-07-15 23:27:54.411934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:39.181 [2024-07-15 23:27:54.424765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.181 [2024-07-15 23:27:54.424794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.181 [2024-07-15 
23:27:54.424826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:39.181 [2024-07-15 23:27:54.437706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.181 [2024-07-15 23:27:54.437757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.181 [2024-07-15 23:27:54.437774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.181 [2024-07-15 23:27:54.450648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.181 [2024-07-15 23:27:54.450676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.181 [2024-07-15 23:27:54.450706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:39.181 [2024-07-15 23:27:54.463556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.181 [2024-07-15 23:27:54.463584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.181 [2024-07-15 23:27:54.463616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:39.181 [2024-07-15 23:27:54.476439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.181 [2024-07-15 23:27:54.476467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.181 [2024-07-15 23:27:54.476497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:39.182 [2024-07-15 23:27:54.489204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.182 [2024-07-15 23:27:54.489232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.182 [2024-07-15 23:27:54.489262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.440 [2024-07-15 23:27:54.502231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.440 [2024-07-15 23:27:54.502259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.440 [2024-07-15 23:27:54.502290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:39.440 [2024-07-15 23:27:54.515391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.440 [2024-07-15 23:27:54.515418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.440 [2024-07-15 23:27:54.515450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:39.440 [2024-07-15 23:27:54.528460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.440 [2024-07-15 23:27:54.528504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.440 [2024-07-15 23:27:54.528521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:39.440 [2024-07-15 23:27:54.541345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.440 [2024-07-15 23:27:54.541374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.440 [2024-07-15 23:27:54.541405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.440 [2024-07-15 23:27:54.554496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.440 [2024-07-15 23:27:54.554525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.440 [2024-07-15 23:27:54.554557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:39.440 [2024-07-15 23:27:54.567782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.440 [2024-07-15 23:27:54.567811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.440 [2024-07-15 23:27:54.567843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:39.440 [2024-07-15 23:27:54.580616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.440 [2024-07-15 23:27:54.580643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.440 [2024-07-15 23:27:54.580674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:39.440 [2024-07-15 23:27:54.590039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.440 [2024-07-15 23:27:54.590069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.440 [2024-07-15 23:27:54.590085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.440 [2024-07-15 23:27:54.599955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.440 [2024-07-15 23:27:54.599985] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.440 [2024-07-15 23:27:54.600002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:39.440 [2024-07-15 23:27:54.611293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.440 [2024-07-15 23:27:54.611320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.440 [2024-07-15 23:27:54.611351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:39.440 [2024-07-15 23:27:54.622443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.440 [2024-07-15 23:27:54.622474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.440 [2024-07-15 23:27:54.622511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:39.440 [2024-07-15 23:27:54.634493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.440 [2024-07-15 23:27:54.634521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.440 [2024-07-15 23:27:54.634551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.440 [2024-07-15 23:27:54.646377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.440 [2024-07-15 23:27:54.646406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.440 [2024-07-15 23:27:54.646438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:39.440 [2024-07-15 23:27:54.654806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.440 [2024-07-15 23:27:54.654836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.440 [2024-07-15 23:27:54.654869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:39.440 [2024-07-15 23:27:54.662508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.440 [2024-07-15 23:27:54.662536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.440 [2024-07-15 23:27:54.662568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:39.440 [2024-07-15 23:27:54.670409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.440 [2024-07-15 
23:27:54.670438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.440 [2024-07-15 23:27:54.670469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.440 [2024-07-15 23:27:54.678761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.440 [2024-07-15 23:27:54.678805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.440 [2024-07-15 23:27:54.678822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:39.440 [2024-07-15 23:27:54.687358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.440 [2024-07-15 23:27:54.687386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.440 [2024-07-15 23:27:54.687417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:39.440 [2024-07-15 23:27:54.697425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.440 [2024-07-15 23:27:54.697455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.440 [2024-07-15 23:27:54.697485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:39.440 [2024-07-15 23:27:54.707361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.440 [2024-07-15 23:27:54.707396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.440 [2024-07-15 23:27:54.707427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.440 [2024-07-15 23:27:54.716807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.440 [2024-07-15 23:27:54.716836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.440 [2024-07-15 23:27:54.716867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:39.440 [2024-07-15 23:27:54.725643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.440 [2024-07-15 23:27:54.725671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.440 [2024-07-15 23:27:54.725702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:39.440 [2024-07-15 23:27:54.734311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x2392d10) 00:24:39.440 [2024-07-15 23:27:54.734339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.440 [2024-07-15 23:27:54.734371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:39.440 [2024-07-15 23:27:54.742619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.440 [2024-07-15 23:27:54.742648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.440 [2024-07-15 23:27:54.742680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.440 [2024-07-15 23:27:54.750115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.440 [2024-07-15 23:27:54.750144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.440 [2024-07-15 23:27:54.750175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:39.699 [2024-07-15 23:27:54.758978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.699 [2024-07-15 23:27:54.759010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.699 [2024-07-15 23:27:54.759045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:39.699 [2024-07-15 23:27:54.768991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.699 [2024-07-15 23:27:54.769035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.699 [2024-07-15 23:27:54.769051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:39.699 [2024-07-15 23:27:54.779345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.699 [2024-07-15 23:27:54.779373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.699 [2024-07-15 23:27:54.779405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.699 [2024-07-15 23:27:54.789442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.699 [2024-07-15 23:27:54.789469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.699 [2024-07-15 23:27:54.789501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:39.699 [2024-07-15 23:27:54.798864] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.699 [2024-07-15 23:27:54.798892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.699 [2024-07-15 23:27:54.798923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:39.699 [2024-07-15 23:27:54.809487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.699 [2024-07-15 23:27:54.809514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.699 [2024-07-15 23:27:54.809544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:39.699 [2024-07-15 23:27:54.819882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.699 [2024-07-15 23:27:54.819916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.699 [2024-07-15 23:27:54.819947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.699 [2024-07-15 23:27:54.831133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.699 [2024-07-15 23:27:54.831169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.699 [2024-07-15 23:27:54.831199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:39.699 [2024-07-15 23:27:54.842996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.699 [2024-07-15 23:27:54.843024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.699 [2024-07-15 23:27:54.843063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:39.699 [2024-07-15 23:27:54.854934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.699 [2024-07-15 23:27:54.854972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.699 [2024-07-15 23:27:54.855003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:39.699 [2024-07-15 23:27:54.866539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.699 [2024-07-15 23:27:54.866574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.699 [2024-07-15 23:27:54.866605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:24:39.699 [2024-07-15 23:27:54.879141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.699 [2024-07-15 23:27:54.879174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.699 [2024-07-15 23:27:54.879205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:39.699 [2024-07-15 23:27:54.891690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.699 [2024-07-15 23:27:54.891731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.699 [2024-07-15 23:27:54.891757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:39.699 [2024-07-15 23:27:54.904187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.699 [2024-07-15 23:27:54.904213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.699 [2024-07-15 23:27:54.904244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:39.699 [2024-07-15 23:27:54.917353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.699 [2024-07-15 23:27:54.917379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.699 [2024-07-15 23:27:54.917409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.699 [2024-07-15 23:27:54.930546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.700 [2024-07-15 23:27:54.930573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.700 [2024-07-15 23:27:54.930605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:39.700 [2024-07-15 23:27:54.943366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.700 [2024-07-15 23:27:54.943410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.700 [2024-07-15 23:27:54.943427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:39.700 [2024-07-15 23:27:54.952669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.700 [2024-07-15 23:27:54.952697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.700 [2024-07-15 23:27:54.952728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:39.700 [2024-07-15 23:27:54.961489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.700 [2024-07-15 23:27:54.961531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.700 [2024-07-15 23:27:54.961548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.700 [2024-07-15 23:27:54.969695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.700 [2024-07-15 23:27:54.969736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.700 [2024-07-15 23:27:54.969762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:39.700 [2024-07-15 23:27:54.978379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.700 [2024-07-15 23:27:54.978407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.700 [2024-07-15 23:27:54.978438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:39.700 [2024-07-15 23:27:54.987332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.700 [2024-07-15 23:27:54.987360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.700 [2024-07-15 23:27:54.987391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:39.700 [2024-07-15 23:27:54.995792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.700 [2024-07-15 23:27:54.995820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.700 [2024-07-15 23:27:54.995851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.700 [2024-07-15 23:27:55.004368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.700 [2024-07-15 23:27:55.004396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.700 [2024-07-15 23:27:55.004427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:39.700 [2024-07-15 23:27:55.013497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.700 [2024-07-15 23:27:55.013529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.700 [2024-07-15 23:27:55.013546] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:39.959 [2024-07-15 23:27:55.022441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.959 [2024-07-15 23:27:55.022470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.959 [2024-07-15 23:27:55.022502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:39.959 [2024-07-15 23:27:55.031267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.959 [2024-07-15 23:27:55.031296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.959 [2024-07-15 23:27:55.031328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.959 [2024-07-15 23:27:55.040257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.959 [2024-07-15 23:27:55.040287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.959 [2024-07-15 23:27:55.040319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:39.959 [2024-07-15 23:27:55.048624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.959 [2024-07-15 23:27:55.048651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.959 [2024-07-15 23:27:55.048690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:39.959 [2024-07-15 23:27:55.056540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.959 [2024-07-15 23:27:55.056567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.959 [2024-07-15 23:27:55.056598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:39.959 [2024-07-15 23:27:55.064520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.959 [2024-07-15 23:27:55.064548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.959 [2024-07-15 23:27:55.064579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.959 [2024-07-15 23:27:55.072542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10) 00:24:39.959 [2024-07-15 23:27:55.072569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:39.959 [2024-07-15 23:27:55.072600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:39.959 [2024-07-15 23:27:55.080808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392d10)
00:24:39.959 [2024-07-15 23:27:55.080836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:39.959 [2024-07-15 23:27:55.080867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-line pattern repeats for every remaining outstanding READ on this qpair -- nvme_tcp.c:1459 data digest error on tqpair=(0x2392d10), nvme_qpair.c:243 READ sqid:1 cid:15 with a changing lba, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) -- from 23:27:55.088710 until the last completion at about 23:27:56.351; only the lba and sqhd fields differ between repetitions ...]
00:24:41.253
00:24:41.253 Latency(us)
00:24:41.253 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:24:41.253 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:24:41.253      nvme0n1                :       2.00    2874.35     359.29       0.00     0.00    5562.26    3640.89   14272.28
00:24:41.253 ===================================================================================================================
00:24:41.253      Total                  :               2874.35     359.29       0.00     0.00    5562.26    3640.89   14272.28
00:24:41.253 0
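A quick cross-check of the randread summary above, using only values from the table (the relationships are the usual bdevperf ones, not additional output from this run): 2874.35 IOPS at the 131072-byte (0.125 MiB) IO size gives 2874.35 x 0.125 = 359.29 MiB/s, matching the MiB/s column, and with the queue depth of 16 Little's law gives an expected average latency of 16 / 2874.35 = 5566 us, within about 0.1% of the reported 5562.26 us average. So the injected digest errors slow the run down but, because the I/O is retried, throughput accounting stays self-consistent.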
00:24:41.253 23:27:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:24:41.253 23:27:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:24:41.253 23:27:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:24:41.253 23:27:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:41.253 | .driver_specific
00:24:41.253 | .nvme_error
00:24:41.253 | .status_code
00:24:41.253 | .command_transient_transport_error'
00:24:41.511 23:27:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 185 > 0 ))
00:24:41.511 23:27:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2438719
00:24:41.511 23:27:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2438719 ']'
00:24:41.511 23:27:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2438719
00:24:41.511 23:27:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:24:41.511 23:27:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:41.511 23:27:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2438719
00:24:41.511 23:27:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:24:41.511 23:27:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:24:41.511 23:27:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2438719'
killing process with pid 2438719
00:24:41.511 23:27:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2438719
Received shutdown signal, test time was about 2.000000 seconds
00:24:41.511
00:24:41.511 Latency(us)
00:24:41.511 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:24:41.511 ===================================================================================================================
00:24:41.511      Total                  :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:24:41.511 23:27:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2438719
00:24:41.511 23:27:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:24:41.770 23:27:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:24:41.770 23:27:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:24:41.770 23:27:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:24:41.770 23:27:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:24:41.770 23:27:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2439130
00:24:41.770 23:27:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
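The get_transient_errcount / bperf_rpc expansion traced above reduces to the stand-alone sketch below. The socket path, RPC name and jq path are copied from the trace itself; the function bodies are a reconstruction rather than the verbatim host/digest.sh source, and the counter being read is the per-status-code statistic that the nvme bdev keeps once bdev_nvme_set_options --nvme-error-stat has been issued (the same option visible in the setup trace that follows).

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bperf_sock=/var/tmp/bperf.sock

# Forward an RPC to the bdevperf instance through its private UNIX socket.
bperf_rpc() {
    "$rpc_py" -s "$bperf_sock" "$@"
}

# Read how many completions of the given bdev ended as COMMAND TRANSIENT TRANSPORT ERROR (00/22).
get_transient_errcount() {
    bperf_rpc bdev_get_iostat -b "$1" \
        | jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error'
}

# As used above: the randread run counted 185 injected digest errors, so this assertion holds.
(( $(get_transient_errcount nvme0n1) > 0 ))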
00:24:41.770 23:27:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2439130 /var/tmp/bperf.sock
00:24:41.770 23:27:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2439130 ']'
00:24:41.770 23:27:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:24:41.770 23:27:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:41.770 23:27:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:24:41.770 23:27:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:24:41.770 23:27:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:41.770 [2024-07-15 23:27:56.968814] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization...
00:24:41.770 [2024-07-15 23:27:56.968894] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2439130 ]
00:24:41.770 EAL: No free 2048 kB hugepages reported on node 1
00:24:42.066 [2024-07-15 23:27:57.029424] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:42.066 [2024-07-15 23:27:57.140687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:24:42.066 23:27:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:42.066 23:27:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:24:42.066 23:27:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:42.066 23:27:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:42.353 23:27:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:24:42.353 23:27:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:42.353 23:27:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:42.353 23:27:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:42.353 23:27:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:42.353 23:27:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:42.919 nvme0n1
00:24:42.919 23:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
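Stripped of the xtrace noise, the setup that this block performs against the freshly started bdevperf (pid 2439130) is the short RPC sequence sketched below. The commands are copied from the trace; the inline comments are interpretation, and reading -i 256 as an injection interval is an assumption, not something the log states.

rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

# Keep per-status NVMe error counters and let the bdev layer retry failed I/O indefinitely.
$rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Clear any crc32c error injection left over from the previous (randread) run.
$rpc accel_error_inject_error -o crc32c -t disable

# Attach the target with data digest (--ddgst) enabled, so every data PDU carries a crc32c.
$rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Re-arm the injection: corrupt the crc32c results computed by the accel framework (-i 256 as traced),
# so digest verification fails and completions surface as COMMAND TRANSIENT TRANSPORT ERROR (00/22).
$rpc accel_error_inject_error -o crc32c -t corrupt -i 256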
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.919 23:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:42.919 23:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.919 23:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:42.919 23:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:42.919 Running I/O for 2 seconds... 00:24:42.919 [2024-07-15 23:27:58.174547] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190e3498 00:24:42.919 [2024-07-15 23:27:58.175363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:15304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.919 [2024-07-15 23:27:58.175407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:42.919 [2024-07-15 23:27:58.188047] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f5378 00:24:42.919 [2024-07-15 23:27:58.188794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.919 [2024-07-15 23:27:58.188822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:42.919 [2024-07-15 23:27:58.201503] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f0350 00:24:42.919 [2024-07-15 23:27:58.202538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.919 [2024-07-15 23:27:58.202571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:42.919 [2024-07-15 23:27:58.214671] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f6cc8 00:24:42.919 [2024-07-15 23:27:58.215997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:17921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.919 [2024-07-15 23:27:58.216023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:42.919 [2024-07-15 23:27:58.227469] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f7da8 00:24:42.919 [2024-07-15 23:27:58.228814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:19370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.919 [2024-07-15 23:27:58.228840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:43.177 [2024-07-15 23:27:58.240624] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190e4578 00:24:43.177 [2024-07-15 23:27:58.242053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:7726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
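The digest-error pass traced above injects crc32c corruption into the accel layer and then counts NVMe completions that return as transient transport errors. A minimal sketch of the same sequence run by hand, assuming an SPDK repo root and the same running bdevperf instance; socket path, target address, and subsystem NQN are copied from the log lines above and are not verified outside this run:
  # enable per-error-code statistics and unlimited retries on the bdev layer
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # corrupt the next 256 crc32c operations in the accel framework (data digest calculation);
  # issued without -s, matching rpc_cmd in the log, i.e. against the target app's default RPC socket
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  # attach the TCP controller with data digest enabled so the corrupted CRC is actually checked
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # run the queued randwrite workload, then read back the transient-transport-error counter
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
The test passes when the final counter is greater than zero, as in the (( 185 > 0 )) check earlier in this log.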
00:24:43.177 [2024-07-15 23:27:58.242079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:43.177 [2024-07-15 23:27:58.254101] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f7538 00:24:43.177 [2024-07-15 23:27:58.255563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.177 [2024-07-15 23:27:58.255589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:43.177 [2024-07-15 23:27:58.266124] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f7100 00:24:43.177 [2024-07-15 23:27:58.267756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.177 [2024-07-15 23:27:58.267810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:43.177 [2024-07-15 23:27:58.277008] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190dece0 00:24:43.177 [2024-07-15 23:27:58.278222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:9649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.177 [2024-07-15 23:27:58.278248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:43.177 [2024-07-15 23:27:58.288424] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190ddc00 00:24:43.177 [2024-07-15 23:27:58.289627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.177 [2024-07-15 23:27:58.289652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:43.177 [2024-07-15 23:27:58.299702] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f0bc0 00:24:43.177 [2024-07-15 23:27:58.300928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.177 [2024-07-15 23:27:58.300954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:43.177 [2024-07-15 23:27:58.311025] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190e1710 00:24:43.177 [2024-07-15 23:27:58.312245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:10289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.177 [2024-07-15 23:27:58.312271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:43.177 [2024-07-15 23:27:58.322304] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f1ca0 00:24:43.177 [2024-07-15 23:27:58.323507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15859 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.177 [2024-07-15 23:27:58.323532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:43.177 [2024-07-15 23:27:58.333549] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f46d0 00:24:43.177 [2024-07-15 23:27:58.334760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.177 [2024-07-15 23:27:58.334787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:43.177 [2024-07-15 23:27:58.344876] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f57b0 00:24:43.177 [2024-07-15 23:27:58.346083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.177 [2024-07-15 23:27:58.346109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:43.177 [2024-07-15 23:27:58.356163] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190e38d0 00:24:43.177 [2024-07-15 23:27:58.357398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.177 [2024-07-15 23:27:58.357423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:43.177 [2024-07-15 23:27:58.368954] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190e49b0 00:24:43.177 [2024-07-15 23:27:58.370639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.177 [2024-07-15 23:27:58.370665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:43.177 [2024-07-15 23:27:58.379471] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190de470 00:24:43.177 [2024-07-15 23:27:58.380755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.177 [2024-07-15 23:27:58.380782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:43.177 [2024-07-15 23:27:58.389672] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190feb58 00:24:43.177 [2024-07-15 23:27:58.391345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.177 [2024-07-15 23:27:58.391371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:43.177 [2024-07-15 23:27:58.399864] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f1868 00:24:43.177 [2024-07-15 23:27:58.400675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 
lba:1092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.177 [2024-07-15 23:27:58.400709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:43.177 [2024-07-15 23:27:58.411830] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f8618 00:24:43.177 [2024-07-15 23:27:58.412798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:10232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.177 [2024-07-15 23:27:58.412825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:43.177 [2024-07-15 23:27:58.423560] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190dece0 00:24:43.178 [2024-07-15 23:27:58.424655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.178 [2024-07-15 23:27:58.424681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:43.178 [2024-07-15 23:27:58.436032] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190df550 00:24:43.178 [2024-07-15 23:27:58.437343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.178 [2024-07-15 23:27:58.437375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:43.178 [2024-07-15 23:27:58.447930] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190fb480 00:24:43.178 [2024-07-15 23:27:58.449366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:25396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.178 [2024-07-15 23:27:58.449392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:43.178 [2024-07-15 23:27:58.459632] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f8618 00:24:43.178 [2024-07-15 23:27:58.461210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.178 [2024-07-15 23:27:58.461236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:43.178 [2024-07-15 23:27:58.471384] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190e4578 00:24:43.178 [2024-07-15 23:27:58.473128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:10909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.178 [2024-07-15 23:27:58.473155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:43.178 [2024-07-15 23:27:58.483147] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190e1710 00:24:43.178 [2024-07-15 23:27:58.484991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:93 nsid:1 lba:20130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.178 [2024-07-15 23:27:58.485017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:43.178 [2024-07-15 23:27:58.491445] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190e73e0 00:24:43.436 [2024-07-15 23:27:58.492358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.436 [2024-07-15 23:27:58.492386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:43.436 [2024-07-15 23:27:58.503701] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f3a28 00:24:43.436 [2024-07-15 23:27:58.504688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.436 [2024-07-15 23:27:58.504714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:43.436 [2024-07-15 23:27:58.515537] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190ed0b0 00:24:43.436 [2024-07-15 23:27:58.516713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.436 [2024-07-15 23:27:58.516759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:43.436 [2024-07-15 23:27:58.526935] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190ebfd0 00:24:43.436 [2024-07-15 23:27:58.528078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:25414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.436 [2024-07-15 23:27:58.528104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:43.436 [2024-07-15 23:27:58.537452] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190dfdc0 00:24:43.436 [2024-07-15 23:27:58.538618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.436 [2024-07-15 23:27:58.538644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:43.436 [2024-07-15 23:27:58.550222] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190e95a0 00:24:43.436 [2024-07-15 23:27:58.551607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:12814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.436 [2024-07-15 23:27:58.551633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:43.436 [2024-07-15 23:27:58.562275] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190fa7d8 00:24:43.436 [2024-07-15 23:27:58.563705] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:17271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.436 [2024-07-15 23:27:58.563752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:43.436 [2024-07-15 23:27:58.572817] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190fac10 00:24:43.436 [2024-07-15 23:27:58.574120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.436 [2024-07-15 23:27:58.574146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:43.436 [2024-07-15 23:27:58.584322] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190fb048 00:24:43.436 [2024-07-15 23:27:58.585639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.436 [2024-07-15 23:27:58.585665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:43.436 [2024-07-15 23:27:58.595805] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190ea680 00:24:43.436 [2024-07-15 23:27:58.597145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:18465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.436 [2024-07-15 23:27:58.597170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:43.436 [2024-07-15 23:27:58.607191] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f4f40 00:24:43.437 [2024-07-15 23:27:58.608548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.437 [2024-07-15 23:27:58.608573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:43.437 [2024-07-15 23:27:58.618492] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190e5a90 00:24:43.437 [2024-07-15 23:27:58.619810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.437 [2024-07-15 23:27:58.619836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:43.437 [2024-07-15 23:27:58.629807] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190e0a68 00:24:43.437 [2024-07-15 23:27:58.631148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.437 [2024-07-15 23:27:58.631174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:43.437 [2024-07-15 23:27:58.641225] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190e2c28 00:24:43.437 [2024-07-15 
23:27:58.642503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.437 [2024-07-15 23:27:58.642529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:43.437 [2024-07-15 23:27:58.652712] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f2948 00:24:43.437 [2024-07-15 23:27:58.654061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:3352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.437 [2024-07-15 23:27:58.654086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:43.437 [2024-07-15 23:27:58.663519] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190efae0 00:24:43.437 [2024-07-15 23:27:58.665226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:9156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.437 [2024-07-15 23:27:58.665251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:43.437 [2024-07-15 23:27:58.673352] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f8618 00:24:43.437 [2024-07-15 23:27:58.674211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.437 [2024-07-15 23:27:58.674235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:43.437 [2024-07-15 23:27:58.685924] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190fbcf0 00:24:43.437 [2024-07-15 23:27:58.687006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:7231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.437 [2024-07-15 23:27:58.687048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:43.437 [2024-07-15 23:27:58.698325] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190df118 00:24:43.437 [2024-07-15 23:27:58.699511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:20667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.437 [2024-07-15 23:27:58.699541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:43.437 [2024-07-15 23:27:58.710211] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f1868 00:24:43.437 [2024-07-15 23:27:58.711431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.437 [2024-07-15 23:27:58.711456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:43.437 [2024-07-15 23:27:58.721469] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f6cc8 
00:24:43.437 [2024-07-15 23:27:58.722684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.437 [2024-07-15 23:27:58.722709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:43.437 [2024-07-15 23:27:58.732809] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190ebfd0 00:24:43.437 [2024-07-15 23:27:58.734037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.437 [2024-07-15 23:27:58.734086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:43.437 [2024-07-15 23:27:58.744117] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190fa7d8 00:24:43.437 [2024-07-15 23:27:58.745340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.437 [2024-07-15 23:27:58.745364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:43.694 [2024-07-15 23:27:58.756165] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190fb8b8 00:24:43.694 [2024-07-15 23:27:58.757413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:12956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.694 [2024-07-15 23:27:58.757438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:43.694 [2024-07-15 23:27:58.767522] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f4f40 00:24:43.695 [2024-07-15 23:27:58.768745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.695 [2024-07-15 23:27:58.768770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:43.695 [2024-07-15 23:27:58.778831] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190e5a90 00:24:43.695 [2024-07-15 23:27:58.780040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:18440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.695 [2024-07-15 23:27:58.780066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:43.695 [2024-07-15 23:27:58.790132] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190e0a68 00:24:43.695 [2024-07-15 23:27:58.791351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.695 [2024-07-15 23:27:58.791376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:43.695 [2024-07-15 23:27:58.801471] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with 
pdu=0x2000190e9168 00:24:43.695 [2024-07-15 23:27:58.802733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:18254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.695 [2024-07-15 23:27:58.802776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:43.695 [2024-07-15 23:27:58.812875] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f8618 00:24:43.695 [2024-07-15 23:27:58.814076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.695 [2024-07-15 23:27:58.814101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:43.695 [2024-07-15 23:27:58.824156] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190e1710 00:24:43.695 [2024-07-15 23:27:58.825369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.695 [2024-07-15 23:27:58.825394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:43.695 [2024-07-15 23:27:58.835398] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190dfdc0 00:24:43.695 [2024-07-15 23:27:58.836612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.695 [2024-07-15 23:27:58.836636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:43.695 [2024-07-15 23:27:58.846658] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190de8a8 00:24:43.695 [2024-07-15 23:27:58.847871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.695 [2024-07-15 23:27:58.847896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:43.695 [2024-07-15 23:27:58.858034] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f7538 00:24:43.695 [2024-07-15 23:27:58.859271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.695 [2024-07-15 23:27:58.859296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:43.695 [2024-07-15 23:27:58.869309] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190fe720 00:24:43.695 [2024-07-15 23:27:58.870499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:24057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.695 [2024-07-15 23:27:58.870524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:43.695 [2024-07-15 23:27:58.880591] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1618b40) with pdu=0x2000190fd640 00:24:43.695 [2024-07-15 23:27:58.881816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.695 [2024-07-15 23:27:58.881841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:43.695 [2024-07-15 23:27:58.891868] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f5be8 00:24:43.695 [2024-07-15 23:27:58.893097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:23805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.695 [2024-07-15 23:27:58.893123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:43.695 [2024-07-15 23:27:58.903166] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190ddc00 00:24:43.695 [2024-07-15 23:27:58.904386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.695 [2024-07-15 23:27:58.904411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:43.695 [2024-07-15 23:27:58.914444] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f92c0 00:24:43.695 [2024-07-15 23:27:58.915658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.695 [2024-07-15 23:27:58.915683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:43.695 [2024-07-15 23:27:58.925750] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190ebb98 00:24:43.695 [2024-07-15 23:27:58.926942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.695 [2024-07-15 23:27:58.926969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:43.695 [2024-07-15 23:27:58.937059] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190fac10 00:24:43.695 [2024-07-15 23:27:58.938368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.695 [2024-07-15 23:27:58.938396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:43.695 [2024-07-15 23:27:58.948897] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f4298 00:24:43.695 [2024-07-15 23:27:58.950154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.695 [2024-07-15 23:27:58.950180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:43.695 [2024-07-15 23:27:58.960514] tcp.c:2081:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1618b40) with pdu=0x2000190e3498 00:24:43.695 [2024-07-15 23:27:58.961759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:24460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.695 [2024-07-15 23:27:58.961786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:43.695 [2024-07-15 23:27:58.971894] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190e6738 00:24:43.695 [2024-07-15 23:27:58.973115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.695 [2024-07-15 23:27:58.973140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:43.695 [2024-07-15 23:27:58.983185] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190eee38 00:24:43.695 [2024-07-15 23:27:58.984398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.695 [2024-07-15 23:27:58.984423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:43.695 [2024-07-15 23:27:58.994458] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190ef270 00:24:43.695 [2024-07-15 23:27:58.995670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:14895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.695 [2024-07-15 23:27:58.995700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:43.695 [2024-07-15 23:27:59.005790] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190e1b48 00:24:43.695 [2024-07-15 23:27:59.007107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.695 [2024-07-15 23:27:59.007148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:43.954 [2024-07-15 23:27:59.017831] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190e01f8 00:24:43.954 [2024-07-15 23:27:59.019053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.954 [2024-07-15 23:27:59.019079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:43.954 [2024-07-15 23:27:59.029133] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190dece0 00:24:43.954 [2024-07-15 23:27:59.030338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.954 [2024-07-15 23:27:59.030363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:43.954 [2024-07-15 23:27:59.040383] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190ee190 00:24:43.954 [2024-07-15 23:27:59.041590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.954 [2024-07-15 23:27:59.041614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:43.954 [2024-07-15 23:27:59.051602] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190ed920 00:24:43.954 [2024-07-15 23:27:59.052812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.954 [2024-07-15 23:27:59.052839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:43.954 [2024-07-15 23:27:59.062911] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190fe2e8 00:24:43.954 [2024-07-15 23:27:59.064130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.954 [2024-07-15 23:27:59.064154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:43.954 [2024-07-15 23:27:59.074174] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190df118 00:24:43.954 [2024-07-15 23:27:59.075388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.954 [2024-07-15 23:27:59.075412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:43.954 [2024-07-15 23:27:59.085398] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f1868 00:24:43.954 [2024-07-15 23:27:59.086632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.954 [2024-07-15 23:27:59.086657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:43.954 [2024-07-15 23:27:59.096675] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f6cc8 00:24:43.954 [2024-07-15 23:27:59.097894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.954 [2024-07-15 23:27:59.097921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:43.954 [2024-07-15 23:27:59.107998] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190ebfd0 00:24:43.954 [2024-07-15 23:27:59.109222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.954 [2024-07-15 23:27:59.109247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:43.954 [2024-07-15 
23:27:59.119312] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190fa7d8 00:24:43.954 [2024-07-15 23:27:59.120509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.954 [2024-07-15 23:27:59.120535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:43.954 [2024-07-15 23:27:59.130605] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190fb8b8 00:24:43.954 [2024-07-15 23:27:59.131797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.954 [2024-07-15 23:27:59.131823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:43.954 [2024-07-15 23:27:59.141862] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f4f40 00:24:43.954 [2024-07-15 23:27:59.143112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.954 [2024-07-15 23:27:59.143137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:43.954 [2024-07-15 23:27:59.153138] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190e5a90 00:24:43.954 [2024-07-15 23:27:59.154353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.954 [2024-07-15 23:27:59.154378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:43.954 [2024-07-15 23:27:59.164428] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190e0a68 00:24:43.954 [2024-07-15 23:27:59.165650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.954 [2024-07-15 23:27:59.165675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:43.954 [2024-07-15 23:27:59.175934] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190e9168 00:24:43.954 [2024-07-15 23:27:59.177186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.954 [2024-07-15 23:27:59.177211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:43.954 [2024-07-15 23:27:59.187488] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f8618 00:24:43.954 [2024-07-15 23:27:59.188708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.954 [2024-07-15 23:27:59.188757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 
00:24:43.955 [2024-07-15 23:27:59.200907] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190ec840 00:24:43.955 [2024-07-15 23:27:59.202439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:6477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.955 [2024-07-15 23:27:59.202469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:43.955 [2024-07-15 23:27:59.214102] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190e23b8 00:24:43.955 [2024-07-15 23:27:59.215607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:8970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.955 [2024-07-15 23:27:59.215638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:43.955 [2024-07-15 23:27:59.226859] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190e7818 00:24:43.955 [2024-07-15 23:27:59.228343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:3617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.955 [2024-07-15 23:27:59.228375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:43.955 [2024-07-15 23:27:59.241194] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f57b0 00:24:43.955 [2024-07-15 23:27:59.243242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:3569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.955 [2024-07-15 23:27:59.243273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:43.955 [2024-07-15 23:27:59.250157] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190eea00 00:24:43.955 [2024-07-15 23:27:59.251022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:24631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.955 [2024-07-15 23:27:59.251060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:43.955 [2024-07-15 23:27:59.262123] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190ed4e8 00:24:43.955 [2024-07-15 23:27:59.262978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.955 [2024-07-15 23:27:59.263003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:44.212 [2024-07-15 23:27:59.275644] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190e4de8 00:24:44.212 [2024-07-15 23:27:59.276780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:18301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.212 [2024-07-15 23:27:59.276806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 
cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:44.212 [2024-07-15 23:27:59.289912] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190fcdd0 00:24:44.212 [2024-07-15 23:27:59.291216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.212 [2024-07-15 23:27:59.291247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:44.212 [2024-07-15 23:27:59.302756] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190e3060 00:24:44.212 [2024-07-15 23:27:59.304096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.212 [2024-07-15 23:27:59.304133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:44.212 [2024-07-15 23:27:59.315535] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f46d0 00:24:44.212 [2024-07-15 23:27:59.316804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:23015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.213 [2024-07-15 23:27:59.316830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:44.213 [2024-07-15 23:27:59.329819] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190fb048 00:24:44.213 [2024-07-15 23:27:59.331689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:19336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.213 [2024-07-15 23:27:59.331720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:44.213 [2024-07-15 23:27:59.343015] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190fc560 00:24:44.213 [2024-07-15 23:27:59.345076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.213 [2024-07-15 23:27:59.345107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:44.213 [2024-07-15 23:27:59.351995] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190de8a8 00:24:44.213 [2024-07-15 23:27:59.352883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.213 [2024-07-15 23:27:59.352908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:44.213 [2024-07-15 23:27:59.366632] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190e84c0 00:24:44.213 [2024-07-15 23:27:59.367812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.213 [2024-07-15 23:27:59.367837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:44.213 [2024-07-15 23:27:59.379592] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f2d80 00:24:44.213 [2024-07-15 23:27:59.381093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.213 [2024-07-15 23:27:59.381124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:44.213 [2024-07-15 23:27:59.392402] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f3a28 00:24:44.213 [2024-07-15 23:27:59.393858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.213 [2024-07-15 23:27:59.393883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:44.213 [2024-07-15 23:27:59.405677] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190eb760 00:24:44.213 [2024-07-15 23:27:59.407210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.213 [2024-07-15 23:27:59.407241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:44.213 [2024-07-15 23:27:59.417758] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190de8a8 00:24:44.213 [2024-07-15 23:27:59.419374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:10157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.213 [2024-07-15 23:27:59.419404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:44.213 [2024-07-15 23:27:59.429159] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190e5a90 00:24:44.213 [2024-07-15 23:27:59.430103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.213 [2024-07-15 23:27:59.430148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:44.213 [2024-07-15 23:27:59.442421] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f31b8 00:24:44.213 [2024-07-15 23:27:59.443525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.213 [2024-07-15 23:27:59.443552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:44.213 [2024-07-15 23:27:59.455462] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190e8088 00:24:44.213 [2024-07-15 23:27:59.456600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.213 [2024-07-15 23:27:59.456632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:44.213 [2024-07-15 23:27:59.468179] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190ed0b0 00:24:44.213 [2024-07-15 23:27:59.469304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:20116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.213 [2024-07-15 23:27:59.469334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:44.213 [2024-07-15 23:27:59.480918] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190fd208 00:24:44.213 [2024-07-15 23:27:59.482066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:11613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.213 [2024-07-15 23:27:59.482110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:44.213 [2024-07-15 23:27:59.493696] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f5be8 00:24:44.213 [2024-07-15 23:27:59.494835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.213 [2024-07-15 23:27:59.494860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:44.213 [2024-07-15 23:27:59.506448] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f20d8 00:24:44.213 [2024-07-15 23:27:59.507601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:20093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.213 [2024-07-15 23:27:59.507632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:44.213 [2024-07-15 23:27:59.519174] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190fb8b8 00:24:44.213 [2024-07-15 23:27:59.520217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:13079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.213 [2024-07-15 23:27:59.520249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:44.471 [2024-07-15 23:27:59.533715] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f4f40 00:24:44.471 [2024-07-15 23:27:59.535465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.471 [2024-07-15 23:27:59.535497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:44.471 [2024-07-15 23:27:59.547111] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190e5220 00:24:44.471 [2024-07-15 23:27:59.548983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:10884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.471 [2024-07-15 23:27:59.549009] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:44.471 [2024-07-15 23:27:59.560702] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f6458 00:24:44.471 [2024-07-15 23:27:59.562804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:17994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.471 [2024-07-15 23:27:59.562830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:44.471 [2024-07-15 23:27:59.569699] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f2948 00:24:44.471 [2024-07-15 23:27:59.570586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:7239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.471 [2024-07-15 23:27:59.570616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:44.471 [2024-07-15 23:27:59.581782] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190dece0 00:24:44.471 [2024-07-15 23:27:59.582635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.471 [2024-07-15 23:27:59.582666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:44.471 [2024-07-15 23:27:59.595122] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190fcdd0 00:24:44.471 [2024-07-15 23:27:59.596157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:9690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.471 [2024-07-15 23:27:59.596198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:44.471 [2024-07-15 23:27:59.608414] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190e12d8 00:24:44.471 [2024-07-15 23:27:59.609633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:24101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.471 [2024-07-15 23:27:59.609665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:44.471 [2024-07-15 23:27:59.622581] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190e1f80 00:24:44.471 [2024-07-15 23:27:59.623975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.471 [2024-07-15 23:27:59.624002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:44.471 [2024-07-15 23:27:59.635691] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190fbcf0 00:24:44.471 [2024-07-15 23:27:59.637226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:9903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.471 [2024-07-15 
23:27:59.637263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:44.471 [2024-07-15 23:27:59.646583] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190ec840 00:24:44.472 [2024-07-15 23:27:59.647258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.472 [2024-07-15 23:27:59.647297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:44.472 [2024-07-15 23:27:59.659832] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190eb760 00:24:44.472 [2024-07-15 23:27:59.660707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:3332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.472 [2024-07-15 23:27:59.660746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:44.472 [2024-07-15 23:27:59.674703] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190ec408 00:24:44.472 [2024-07-15 23:27:59.676647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:15959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.472 [2024-07-15 23:27:59.676678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:44.472 [2024-07-15 23:27:59.686753] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190e1710 00:24:44.472 [2024-07-15 23:27:59.688175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.472 [2024-07-15 23:27:59.688205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:44.472 [2024-07-15 23:27:59.698335] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f4298 00:24:44.472 [2024-07-15 23:27:59.700294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.472 [2024-07-15 23:27:59.700321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:44.472 [2024-07-15 23:27:59.711821] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f6cc8 00:24:44.472 [2024-07-15 23:27:59.713259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:1289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.472 [2024-07-15 23:27:59.713290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:44.472 [2024-07-15 23:27:59.726835] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190eb760 00:24:44.472 [2024-07-15 23:27:59.728797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:44.472 [2024-07-15 23:27:59.728822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:44.472 [2024-07-15 23:27:59.737515] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190fcdd0 00:24:44.472 [2024-07-15 23:27:59.738334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.472 [2024-07-15 23:27:59.738365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:44.472 [2024-07-15 23:27:59.751957] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f35f0 00:24:44.472 [2024-07-15 23:27:59.753746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.472 [2024-07-15 23:27:59.753789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:44.472 [2024-07-15 23:27:59.762844] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190e6b70 00:24:44.472 [2024-07-15 23:27:59.763989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:11655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.472 [2024-07-15 23:27:59.764014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:44.472 [2024-07-15 23:27:59.775506] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190e99d8 00:24:44.472 [2024-07-15 23:27:59.776642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:6914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.472 [2024-07-15 23:27:59.776673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:44.729 [2024-07-15 23:27:59.788493] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190ed920 00:24:44.729 [2024-07-15 23:27:59.789634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:14819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.729 [2024-07-15 23:27:59.789665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:44.729 [2024-07-15 23:27:59.801412] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190e4578 00:24:44.729 [2024-07-15 23:27:59.802546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:18738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.729 [2024-07-15 23:27:59.802577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:44.729 [2024-07-15 23:27:59.814248] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190fa3a0 00:24:44.729 [2024-07-15 23:27:59.815340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3330 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:44.729 [2024-07-15 23:27:59.815370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:44.729 [2024-07-15 23:27:59.827076] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190e8088 00:24:44.729 [2024-07-15 23:27:59.828182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:14214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.729 [2024-07-15 23:27:59.828213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:44.729 [2024-07-15 23:27:59.839935] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190ef270 00:24:44.729 [2024-07-15 23:27:59.841050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.729 [2024-07-15 23:27:59.841095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:44.729 [2024-07-15 23:27:59.852695] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190eb760 00:24:44.729 [2024-07-15 23:27:59.853796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.729 [2024-07-15 23:27:59.853821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:44.729 [2024-07-15 23:27:59.865515] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190e23b8 00:24:44.729 [2024-07-15 23:27:59.866682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.729 [2024-07-15 23:27:59.866712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:44.729 [2024-07-15 23:27:59.878283] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f7100 00:24:44.729 [2024-07-15 23:27:59.879403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.729 [2024-07-15 23:27:59.879434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:44.729 [2024-07-15 23:27:59.891050] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190e0a68 00:24:44.729 [2024-07-15 23:27:59.892148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.729 [2024-07-15 23:27:59.892179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:44.729 [2024-07-15 23:27:59.904068] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190ea680 00:24:44.729 [2024-07-15 23:27:59.904921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 
lba:18797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.729 [2024-07-15 23:27:59.904946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:44.729 [2024-07-15 23:27:59.919420] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f8e88 00:24:44.729 [2024-07-15 23:27:59.921521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.729 [2024-07-15 23:27:59.921551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:44.729 [2024-07-15 23:27:59.928533] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190eaab8 00:24:44.730 [2024-07-15 23:27:59.929451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.730 [2024-07-15 23:27:59.929481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:44.730 [2024-07-15 23:27:59.941488] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190e4de8 00:24:44.730 [2024-07-15 23:27:59.942430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.730 [2024-07-15 23:27:59.942461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:44.730 [2024-07-15 23:27:59.954236] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f9f68 00:24:44.730 [2024-07-15 23:27:59.955172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:8941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.730 [2024-07-15 23:27:59.955198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:44.730 [2024-07-15 23:27:59.967206] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f3e60 00:24:44.730 [2024-07-15 23:27:59.968112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:23638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.730 [2024-07-15 23:27:59.968148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:44.730 [2024-07-15 23:27:59.980478] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f46d0 00:24:44.730 [2024-07-15 23:27:59.981568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.730 [2024-07-15 23:27:59.981598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:44.730 [2024-07-15 23:27:59.993411] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190e6fa8 00:24:44.730 [2024-07-15 23:27:59.994530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:59 nsid:1 lba:23460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.730 [2024-07-15 23:27:59.994561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:44.730 [2024-07-15 23:28:00.005711] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190dece0 00:24:44.730 [2024-07-15 23:28:00.007145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.730 [2024-07-15 23:28:00.007198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:44.730 [2024-07-15 23:28:00.020710] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f1868 00:24:44.730 [2024-07-15 23:28:00.022018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:25258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.730 [2024-07-15 23:28:00.022070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:44.730 [2024-07-15 23:28:00.035546] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f5378 00:24:44.730 [2024-07-15 23:28:00.037041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.730 [2024-07-15 23:28:00.037073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:44.987 [2024-07-15 23:28:00.048799] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190fb8b8 00:24:44.987 [2024-07-15 23:28:00.050277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:10713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.987 [2024-07-15 23:28:00.050310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:44.987 [2024-07-15 23:28:00.062111] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190f7970 00:24:44.987 [2024-07-15 23:28:00.063595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:9340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.987 [2024-07-15 23:28:00.063626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:44.987 [2024-07-15 23:28:00.074923] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190e27f0 00:24:44.987 [2024-07-15 23:28:00.076374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.987 [2024-07-15 23:28:00.076405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:44.987 [2024-07-15 23:28:00.087761] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190fc560 00:24:44.987 [2024-07-15 23:28:00.089232] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.987 [2024-07-15 23:28:00.089274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:44.987 [2024-07-15 23:28:00.100517] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190dfdc0 00:24:44.987 [2024-07-15 23:28:00.101989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.987 [2024-07-15 23:28:00.102013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:44.987 [2024-07-15 23:28:00.113293] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190eee38 00:24:44.987 [2024-07-15 23:28:00.114746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:2025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.987 [2024-07-15 23:28:00.114790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:44.987 [2024-07-15 23:28:00.126011] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190df118 00:24:44.987 [2024-07-15 23:28:00.127468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.987 [2024-07-15 23:28:00.127498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:44.987 [2024-07-15 23:28:00.138831] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190e1f80 00:24:44.987 [2024-07-15 23:28:00.140276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.987 [2024-07-15 23:28:00.140307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:44.987 [2024-07-15 23:28:00.151543] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190fef90 00:24:44.987 [2024-07-15 23:28:00.153006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.987 [2024-07-15 23:28:00.153046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:44.987 [2024-07-15 23:28:00.164244] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1618b40) with pdu=0x2000190e5a90 00:24:44.987 [2024-07-15 23:28:00.165678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:15741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.987 [2024-07-15 23:28:00.165709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:44.987 00:24:44.987 Latency(us) 00:24:44.987 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:44.987 Job: nvme0n1 (Core Mask 0x2, workload: 
randwrite, depth: 128, IO size: 4096) 00:24:44.987 nvme0n1 : 2.01 21009.63 82.07 0.00 0.00 6082.05 2827.76 15922.82 00:24:44.987 =================================================================================================================== 00:24:44.987 Total : 21009.63 82.07 0.00 0.00 6082.05 2827.76 15922.82 00:24:44.987 0 00:24:44.987 23:28:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:44.987 23:28:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:44.987 23:28:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:44.987 | .driver_specific 00:24:44.987 | .nvme_error 00:24:44.987 | .status_code 00:24:44.987 | .command_transient_transport_error' 00:24:44.988 23:28:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:45.245 23:28:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 165 > 0 )) 00:24:45.245 23:28:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2439130 00:24:45.245 23:28:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2439130 ']' 00:24:45.245 23:28:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2439130 00:24:45.245 23:28:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:45.245 23:28:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:45.245 23:28:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2439130 00:24:45.245 23:28:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:45.245 23:28:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:45.245 23:28:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2439130' 00:24:45.245 killing process with pid 2439130 00:24:45.245 23:28:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2439130 00:24:45.245 Received shutdown signal, test time was about 2.000000 seconds 00:24:45.245 00:24:45.245 Latency(us) 00:24:45.245 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:45.245 =================================================================================================================== 00:24:45.245 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:45.245 23:28:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2439130 00:24:45.503 23:28:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:24:45.503 23:28:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:45.503 23:28:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:24:45.503 23:28:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:24:45.503 23:28:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:24:45.503 23:28:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2439591 00:24:45.503 23:28:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:24:45.503 23:28:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2439591 /var/tmp/bperf.sock 00:24:45.503 23:28:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2439591 ']' 00:24:45.503 23:28:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:45.503 23:28:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:45.503 23:28:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:45.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:45.503 23:28:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:45.503 23:28:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:45.503 [2024-07-15 23:28:00.757710] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:24:45.503 [2024-07-15 23:28:00.757840] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2439591 ] 00:24:45.503 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:45.503 Zero copy mechanism will not be used. 00:24:45.503 EAL: No free 2048 kB hugepages reported on node 1 00:24:45.760 [2024-07-15 23:28:00.818576] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:45.760 [2024-07-15 23:28:00.927185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:45.760 23:28:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:45.760 23:28:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:45.760 23:28:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:45.760 23:28:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:46.017 23:28:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:46.017 23:28:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.017 23:28:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:46.017 23:28:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.017 23:28:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:46.018 23:28:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b 
nvme0 00:24:46.583 nvme0n1 00:24:46.583 23:28:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:24:46.583 23:28:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.583 23:28:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:46.583 23:28:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.583 23:28:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:46.583 23:28:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:46.583 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:46.583 Zero copy mechanism will not be used. 00:24:46.583 Running I/O for 2 seconds... 00:24:46.583 [2024-07-15 23:28:01.749471] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.583 [2024-07-15 23:28:01.749900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.583 [2024-07-15 23:28:01.749934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:46.583 [2024-07-15 23:28:01.762656] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.583 [2024-07-15 23:28:01.763011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.583 [2024-07-15 23:28:01.763040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:46.583 [2024-07-15 23:28:01.775502] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.583 [2024-07-15 23:28:01.775893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.584 [2024-07-15 23:28:01.775922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:46.584 [2024-07-15 23:28:01.783203] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.584 [2024-07-15 23:28:01.783597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.584 [2024-07-15 23:28:01.783630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.584 [2024-07-15 23:28:01.790422] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.584 [2024-07-15 23:28:01.790792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.584 [2024-07-15 23:28:01.790820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:24:46.584 [2024-07-15 23:28:01.797667] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.584 [2024-07-15 23:28:01.798052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.584 [2024-07-15 23:28:01.798085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:46.584 [2024-07-15 23:28:01.804625] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.584 [2024-07-15 23:28:01.804955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.584 [2024-07-15 23:28:01.804985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:46.584 [2024-07-15 23:28:01.812806] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.584 [2024-07-15 23:28:01.813183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.584 [2024-07-15 23:28:01.813215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.584 [2024-07-15 23:28:01.820599] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.584 [2024-07-15 23:28:01.820959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.584 [2024-07-15 23:28:01.820989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:46.584 [2024-07-15 23:28:01.829269] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.584 [2024-07-15 23:28:01.829609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.584 [2024-07-15 23:28:01.829641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:46.584 [2024-07-15 23:28:01.837633] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.584 [2024-07-15 23:28:01.837959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.584 [2024-07-15 23:28:01.837986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:46.584 [2024-07-15 23:28:01.845493] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.584 [2024-07-15 23:28:01.845842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.584 [2024-07-15 23:28:01.845890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.584 [2024-07-15 23:28:01.852972] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.584 [2024-07-15 23:28:01.853329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.584 [2024-07-15 23:28:01.853361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:46.584 [2024-07-15 23:28:01.861250] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.584 [2024-07-15 23:28:01.861590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.584 [2024-07-15 23:28:01.861622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:46.584 [2024-07-15 23:28:01.869469] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.584 [2024-07-15 23:28:01.869824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.584 [2024-07-15 23:28:01.869852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:46.584 [2024-07-15 23:28:01.877767] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.584 [2024-07-15 23:28:01.878110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.584 [2024-07-15 23:28:01.878142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.584 [2024-07-15 23:28:01.885512] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.584 [2024-07-15 23:28:01.885719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.584 [2024-07-15 23:28:01.885759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:46.584 [2024-07-15 23:28:01.895140] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.584 [2024-07-15 23:28:01.895484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.584 [2024-07-15 23:28:01.895521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:46.843 [2024-07-15 23:28:01.904526] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.843 [2024-07-15 23:28:01.904704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.843 [2024-07-15 23:28:01.904735] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:46.843 [2024-07-15 23:28:01.913424] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.843 [2024-07-15 23:28:01.913784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.843 [2024-07-15 23:28:01.913811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.843 [2024-07-15 23:28:01.922117] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.843 [2024-07-15 23:28:01.922463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.843 [2024-07-15 23:28:01.922495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:46.843 [2024-07-15 23:28:01.930082] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.843 [2024-07-15 23:28:01.930473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.843 [2024-07-15 23:28:01.930505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:46.843 [2024-07-15 23:28:01.938646] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.843 [2024-07-15 23:28:01.938975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.843 [2024-07-15 23:28:01.939002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:46.843 [2024-07-15 23:28:01.948035] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.843 [2024-07-15 23:28:01.948387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.843 [2024-07-15 23:28:01.948419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.843 [2024-07-15 23:28:01.956395] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.843 [2024-07-15 23:28:01.956762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.843 [2024-07-15 23:28:01.956809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:46.843 [2024-07-15 23:28:01.964260] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.843 [2024-07-15 23:28:01.964604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.843 [2024-07-15 23:28:01.964636] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:46.843 [2024-07-15 23:28:01.971510] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.843 [2024-07-15 23:28:01.971891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.843 [2024-07-15 23:28:01.971933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:46.843 [2024-07-15 23:28:01.979235] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.843 [2024-07-15 23:28:01.979576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.843 [2024-07-15 23:28:01.979607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.843 [2024-07-15 23:28:01.986310] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.843 [2024-07-15 23:28:01.986689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.843 [2024-07-15 23:28:01.986719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:46.843 [2024-07-15 23:28:01.993630] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.843 [2024-07-15 23:28:01.994036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.843 [2024-07-15 23:28:01.994068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:46.843 [2024-07-15 23:28:02.002564] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.843 [2024-07-15 23:28:02.002907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.843 [2024-07-15 23:28:02.002934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:46.843 [2024-07-15 23:28:02.010900] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.843 [2024-07-15 23:28:02.011270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.843 [2024-07-15 23:28:02.011303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.843 [2024-07-15 23:28:02.018568] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.844 [2024-07-15 23:28:02.018914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:46.844 [2024-07-15 23:28:02.018942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:46.844 [2024-07-15 23:28:02.026526] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.844 [2024-07-15 23:28:02.026874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.844 [2024-07-15 23:28:02.026903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:46.844 [2024-07-15 23:28:02.035020] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.844 [2024-07-15 23:28:02.035415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.844 [2024-07-15 23:28:02.035442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:46.844 [2024-07-15 23:28:02.043440] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.844 [2024-07-15 23:28:02.043884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.844 [2024-07-15 23:28:02.043912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.844 [2024-07-15 23:28:02.051808] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.844 [2024-07-15 23:28:02.052169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.844 [2024-07-15 23:28:02.052201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:46.844 [2024-07-15 23:28:02.058826] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.844 [2024-07-15 23:28:02.059174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.844 [2024-07-15 23:28:02.059213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:46.844 [2024-07-15 23:28:02.065498] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.844 [2024-07-15 23:28:02.065985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.844 [2024-07-15 23:28:02.066013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:46.844 [2024-07-15 23:28:02.072193] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.844 [2024-07-15 23:28:02.072614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.844 [2024-07-15 23:28:02.072645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.844 [2024-07-15 23:28:02.079555] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.844 [2024-07-15 23:28:02.079899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.844 [2024-07-15 23:28:02.079928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:46.844 [2024-07-15 23:28:02.087346] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.844 [2024-07-15 23:28:02.087798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.844 [2024-07-15 23:28:02.087825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:46.844 [2024-07-15 23:28:02.095584] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.844 [2024-07-15 23:28:02.095908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.844 [2024-07-15 23:28:02.095937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:46.844 [2024-07-15 23:28:02.102660] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.844 [2024-07-15 23:28:02.102970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.844 [2024-07-15 23:28:02.102998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.844 [2024-07-15 23:28:02.108671] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.844 [2024-07-15 23:28:02.109006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.844 [2024-07-15 23:28:02.109033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:46.844 [2024-07-15 23:28:02.114852] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.844 [2024-07-15 23:28:02.115183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.844 [2024-07-15 23:28:02.115210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:46.844 [2024-07-15 23:28:02.121149] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.844 [2024-07-15 23:28:02.121459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.844 [2024-07-15 23:28:02.121486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:46.844 [2024-07-15 23:28:02.128305] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.844 [2024-07-15 23:28:02.128618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.844 [2024-07-15 23:28:02.128644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.844 [2024-07-15 23:28:02.135665] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.844 [2024-07-15 23:28:02.135992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.844 [2024-07-15 23:28:02.136020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:46.844 [2024-07-15 23:28:02.142991] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.844 [2024-07-15 23:28:02.143278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.844 [2024-07-15 23:28:02.143304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:46.844 [2024-07-15 23:28:02.149056] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.844 [2024-07-15 23:28:02.149356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.844 [2024-07-15 23:28:02.149382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:46.844 [2024-07-15 23:28:02.155031] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:46.844 [2024-07-15 23:28:02.155421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.844 [2024-07-15 23:28:02.155464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:47.105 [2024-07-15 23:28:02.161406] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.105 [2024-07-15 23:28:02.161725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.105 [2024-07-15 23:28:02.161760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:47.105 [2024-07-15 23:28:02.168294] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.105 [2024-07-15 23:28:02.168594] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.105 [2024-07-15 23:28:02.168621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:47.105 [2024-07-15 23:28:02.175627] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.105 [2024-07-15 23:28:02.175972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.105 [2024-07-15 23:28:02.176008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:47.105 [2024-07-15 23:28:02.182067] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.105 [2024-07-15 23:28:02.182361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.105 [2024-07-15 23:28:02.182387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:47.105 [2024-07-15 23:28:02.188116] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.105 [2024-07-15 23:28:02.188419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.105 [2024-07-15 23:28:02.188446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:47.105 [2024-07-15 23:28:02.194287] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.105 [2024-07-15 23:28:02.194601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.105 [2024-07-15 23:28:02.194627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:47.105 [2024-07-15 23:28:02.200382] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.105 [2024-07-15 23:28:02.200680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.105 [2024-07-15 23:28:02.200706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:47.105 [2024-07-15 23:28:02.206593] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.105 [2024-07-15 23:28:02.206926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.105 [2024-07-15 23:28:02.206953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:47.105 [2024-07-15 23:28:02.214272] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.105 
[2024-07-15 23:28:02.214572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.105 [2024-07-15 23:28:02.214599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:47.105 [2024-07-15 23:28:02.220457] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.105 [2024-07-15 23:28:02.220782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.105 [2024-07-15 23:28:02.220810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:47.105 [2024-07-15 23:28:02.226836] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.105 [2024-07-15 23:28:02.227162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.105 [2024-07-15 23:28:02.227188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:47.105 [2024-07-15 23:28:02.233175] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.105 [2024-07-15 23:28:02.233567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.105 [2024-07-15 23:28:02.233607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:47.105 [2024-07-15 23:28:02.240775] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.105 [2024-07-15 23:28:02.241134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.105 [2024-07-15 23:28:02.241161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:47.105 [2024-07-15 23:28:02.248766] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.105 [2024-07-15 23:28:02.249092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.105 [2024-07-15 23:28:02.249118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:47.105 [2024-07-15 23:28:02.256526] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.105 [2024-07-15 23:28:02.256878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.105 [2024-07-15 23:28:02.256907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:47.105 [2024-07-15 23:28:02.264247] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.105 [2024-07-15 23:28:02.264538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.105 [2024-07-15 23:28:02.264566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:47.105 [2024-07-15 23:28:02.272251] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.105 [2024-07-15 23:28:02.272568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.105 [2024-07-15 23:28:02.272595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:47.105 [2024-07-15 23:28:02.280268] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.105 [2024-07-15 23:28:02.280582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.105 [2024-07-15 23:28:02.280608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:47.105 [2024-07-15 23:28:02.288315] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.105 [2024-07-15 23:28:02.288622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.105 [2024-07-15 23:28:02.288648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:47.105 [2024-07-15 23:28:02.296273] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.105 [2024-07-15 23:28:02.296595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.105 [2024-07-15 23:28:02.296621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:47.105 [2024-07-15 23:28:02.303821] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.105 [2024-07-15 23:28:02.303931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.105 [2024-07-15 23:28:02.303958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:47.105 [2024-07-15 23:28:02.312011] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.105 [2024-07-15 23:28:02.312349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.105 [2024-07-15 23:28:02.312376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:47.105 [2024-07-15 23:28:02.319850] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.105 [2024-07-15 23:28:02.320293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.105 [2024-07-15 23:28:02.320319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:47.106 [2024-07-15 23:28:02.327826] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.106 [2024-07-15 23:28:02.328265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.106 [2024-07-15 23:28:02.328291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:47.106 [2024-07-15 23:28:02.335668] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.106 [2024-07-15 23:28:02.336084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.106 [2024-07-15 23:28:02.336111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:47.106 [2024-07-15 23:28:02.343606] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.106 [2024-07-15 23:28:02.343939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.106 [2024-07-15 23:28:02.343967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:47.106 [2024-07-15 23:28:02.351314] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.106 [2024-07-15 23:28:02.351704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.106 [2024-07-15 23:28:02.351750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:47.106 [2024-07-15 23:28:02.359676] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.106 [2024-07-15 23:28:02.360015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.106 [2024-07-15 23:28:02.360042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:47.106 [2024-07-15 23:28:02.367696] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.106 [2024-07-15 23:28:02.368035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.106 [2024-07-15 23:28:02.368071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:24:47.106 [2024-07-15 23:28:02.375474] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.106 [2024-07-15 23:28:02.375812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.106 [2024-07-15 23:28:02.375839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:47.106 [2024-07-15 23:28:02.383575] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.106 [2024-07-15 23:28:02.383920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.106 [2024-07-15 23:28:02.383947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:47.106 [2024-07-15 23:28:02.391637] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.106 [2024-07-15 23:28:02.391971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.106 [2024-07-15 23:28:02.391997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:47.106 [2024-07-15 23:28:02.401190] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.106 [2024-07-15 23:28:02.401507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.106 [2024-07-15 23:28:02.401534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:47.106 [2024-07-15 23:28:02.410076] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.106 [2024-07-15 23:28:02.410471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.106 [2024-07-15 23:28:02.410512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:47.106 [2024-07-15 23:28:02.418412] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.106 [2024-07-15 23:28:02.418769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.106 [2024-07-15 23:28:02.418798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:47.364 [2024-07-15 23:28:02.427439] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.364 [2024-07-15 23:28:02.427616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.364 [2024-07-15 23:28:02.427643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:47.364 [2024-07-15 23:28:02.436779] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.364 [2024-07-15 23:28:02.437191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.364 [2024-07-15 23:28:02.437232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:47.364 [2024-07-15 23:28:02.446157] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.364 [2024-07-15 23:28:02.446491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.364 [2024-07-15 23:28:02.446518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:47.364 [2024-07-15 23:28:02.455185] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.364 [2024-07-15 23:28:02.455497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.364 [2024-07-15 23:28:02.455524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:47.364 [2024-07-15 23:28:02.464765] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.364 [2024-07-15 23:28:02.465113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.364 [2024-07-15 23:28:02.465140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:47.365 [2024-07-15 23:28:02.474556] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.365 [2024-07-15 23:28:02.474768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.365 [2024-07-15 23:28:02.474795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:47.365 [2024-07-15 23:28:02.484050] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.365 [2024-07-15 23:28:02.484386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.365 [2024-07-15 23:28:02.484412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:47.365 [2024-07-15 23:28:02.493421] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.365 [2024-07-15 23:28:02.493756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.365 [2024-07-15 23:28:02.493799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:47.365 [2024-07-15 23:28:02.502765] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.365 [2024-07-15 23:28:02.503104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.365 [2024-07-15 23:28:02.503131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:47.365 [2024-07-15 23:28:02.511979] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.365 [2024-07-15 23:28:02.512319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.365 [2024-07-15 23:28:02.512346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:47.365 [2024-07-15 23:28:02.522057] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.365 [2024-07-15 23:28:02.522451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.365 [2024-07-15 23:28:02.522478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:47.365 [2024-07-15 23:28:02.531935] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.365 [2024-07-15 23:28:02.532268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.365 [2024-07-15 23:28:02.532294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:47.365 [2024-07-15 23:28:02.540287] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.365 [2024-07-15 23:28:02.540708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.365 [2024-07-15 23:28:02.540733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:47.365 [2024-07-15 23:28:02.548651] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.365 [2024-07-15 23:28:02.548978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.365 [2024-07-15 23:28:02.549005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:47.365 [2024-07-15 23:28:02.557095] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.365 [2024-07-15 23:28:02.557500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.365 [2024-07-15 23:28:02.557541] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:47.365 [2024-07-15 23:28:02.566624] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.365 [2024-07-15 23:28:02.566961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.365 [2024-07-15 23:28:02.566989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:47.365 [2024-07-15 23:28:02.576770] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.365 [2024-07-15 23:28:02.577205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.365 [2024-07-15 23:28:02.577231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:47.365 [2024-07-15 23:28:02.586651] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.365 [2024-07-15 23:28:02.586981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.365 [2024-07-15 23:28:02.587009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:47.365 [2024-07-15 23:28:02.595783] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.365 [2024-07-15 23:28:02.596218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.365 [2024-07-15 23:28:02.596244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:47.365 [2024-07-15 23:28:02.605734] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.365 [2024-07-15 23:28:02.605968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.365 [2024-07-15 23:28:02.606006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:47.365 [2024-07-15 23:28:02.614454] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.365 [2024-07-15 23:28:02.614584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.365 [2024-07-15 23:28:02.614610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:47.365 [2024-07-15 23:28:02.623553] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.365 [2024-07-15 23:28:02.623938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.365 
[2024-07-15 23:28:02.623978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:47.365 [2024-07-15 23:28:02.631853] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.365 [2024-07-15 23:28:02.632237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.365 [2024-07-15 23:28:02.632279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:47.365 [2024-07-15 23:28:02.640047] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.365 [2024-07-15 23:28:02.640358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.365 [2024-07-15 23:28:02.640385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:47.365 [2024-07-15 23:28:02.647876] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.365 [2024-07-15 23:28:02.648207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.365 [2024-07-15 23:28:02.648233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:47.365 [2024-07-15 23:28:02.655486] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.365 [2024-07-15 23:28:02.655808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.365 [2024-07-15 23:28:02.655836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:47.365 [2024-07-15 23:28:02.663505] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.365 [2024-07-15 23:28:02.663833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.365 [2024-07-15 23:28:02.663861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:47.365 [2024-07-15 23:28:02.671301] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.365 [2024-07-15 23:28:02.671597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.365 [2024-07-15 23:28:02.671624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:47.624 [2024-07-15 23:28:02.679329] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.624 [2024-07-15 23:28:02.679644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.624 [2024-07-15 23:28:02.679672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:47.624 [2024-07-15 23:28:02.687325] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.624 [2024-07-15 23:28:02.687624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.624 [2024-07-15 23:28:02.687652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:47.624 [2024-07-15 23:28:02.695280] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.624 [2024-07-15 23:28:02.695578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.624 [2024-07-15 23:28:02.695605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:47.624 [2024-07-15 23:28:02.702701] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.624 [2024-07-15 23:28:02.703021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.624 [2024-07-15 23:28:02.703050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:47.624 [2024-07-15 23:28:02.709064] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.624 [2024-07-15 23:28:02.709361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.624 [2024-07-15 23:28:02.709388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:47.624 [2024-07-15 23:28:02.715539] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.624 [2024-07-15 23:28:02.715846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.624 [2024-07-15 23:28:02.715875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:47.624 [2024-07-15 23:28:02.721666] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.624 [2024-07-15 23:28:02.721971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.624 [2024-07-15 23:28:02.722000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:47.624 [2024-07-15 23:28:02.727956] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.624 [2024-07-15 23:28:02.728254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.624 [2024-07-15 23:28:02.728282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:47.624 [2024-07-15 23:28:02.734432] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.624 [2024-07-15 23:28:02.734733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.624 [2024-07-15 23:28:02.734791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:47.624 [2024-07-15 23:28:02.741252] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.624 [2024-07-15 23:28:02.741554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.624 [2024-07-15 23:28:02.741580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:47.624 [2024-07-15 23:28:02.747674] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.624 [2024-07-15 23:28:02.747978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.624 [2024-07-15 23:28:02.748006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:47.624 [2024-07-15 23:28:02.754793] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.624 [2024-07-15 23:28:02.755088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.624 [2024-07-15 23:28:02.755130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:47.624 [2024-07-15 23:28:02.761910] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.624 [2024-07-15 23:28:02.762217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.624 [2024-07-15 23:28:02.762246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:47.625 [2024-07-15 23:28:02.769195] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.625 [2024-07-15 23:28:02.769477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.625 [2024-07-15 23:28:02.769504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:47.625 [2024-07-15 23:28:02.775349] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.625 [2024-07-15 23:28:02.775625] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.625 [2024-07-15 23:28:02.775652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:47.625 [2024-07-15 23:28:02.781960] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.625 [2024-07-15 23:28:02.782245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.625 [2024-07-15 23:28:02.782272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:47.625 [2024-07-15 23:28:02.788437] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.625 [2024-07-15 23:28:02.788716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.625 [2024-07-15 23:28:02.788767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:47.625 [2024-07-15 23:28:02.794692] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.625 [2024-07-15 23:28:02.794997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.625 [2024-07-15 23:28:02.795040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:47.625 [2024-07-15 23:28:02.800556] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.625 [2024-07-15 23:28:02.800873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.625 [2024-07-15 23:28:02.800902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:47.625 [2024-07-15 23:28:02.806334] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.625 [2024-07-15 23:28:02.806610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.625 [2024-07-15 23:28:02.806636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:47.625 [2024-07-15 23:28:02.812799] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.625 [2024-07-15 23:28:02.813091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.625 [2024-07-15 23:28:02.813118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:47.625 [2024-07-15 23:28:02.819195] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.625 
[2024-07-15 23:28:02.819477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.625 [2024-07-15 23:28:02.819503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:47.625 [2024-07-15 23:28:02.826469] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.625 [2024-07-15 23:28:02.826771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.625 [2024-07-15 23:28:02.826798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:47.625 [2024-07-15 23:28:02.833771] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.625 [2024-07-15 23:28:02.834088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.625 [2024-07-15 23:28:02.834130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:47.625 [2024-07-15 23:28:02.840062] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.625 [2024-07-15 23:28:02.840334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.625 [2024-07-15 23:28:02.840362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:47.625 [2024-07-15 23:28:02.846081] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.625 [2024-07-15 23:28:02.846355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.625 [2024-07-15 23:28:02.846389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:47.625 [2024-07-15 23:28:02.853512] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.625 [2024-07-15 23:28:02.853933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.625 [2024-07-15 23:28:02.853961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:47.625 [2024-07-15 23:28:02.860897] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.625 [2024-07-15 23:28:02.861189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.625 [2024-07-15 23:28:02.861217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:47.625 [2024-07-15 23:28:02.867399] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.625 [2024-07-15 23:28:02.867713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.625 [2024-07-15 23:28:02.867763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:47.625 [2024-07-15 23:28:02.874158] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.625 [2024-07-15 23:28:02.874429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.625 [2024-07-15 23:28:02.874455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:47.625 [2024-07-15 23:28:02.880240] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.625 [2024-07-15 23:28:02.880515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.625 [2024-07-15 23:28:02.880541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:47.625 [2024-07-15 23:28:02.886541] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.625 [2024-07-15 23:28:02.886855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.625 [2024-07-15 23:28:02.886883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:47.625 [2024-07-15 23:28:02.892921] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.625 [2024-07-15 23:28:02.893209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.625 [2024-07-15 23:28:02.893235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:47.625 [2024-07-15 23:28:02.898913] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.625 [2024-07-15 23:28:02.899217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.625 [2024-07-15 23:28:02.899244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:47.625 [2024-07-15 23:28:02.905005] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.625 [2024-07-15 23:28:02.905284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.625 [2024-07-15 23:28:02.905322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:47.625 [2024-07-15 23:28:02.910694] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.625 [2024-07-15 23:28:02.910999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.625 [2024-07-15 23:28:02.911037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:47.625 [2024-07-15 23:28:02.916836] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.625 [2024-07-15 23:28:02.917151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.625 [2024-07-15 23:28:02.917177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:47.625 [2024-07-15 23:28:02.923917] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.625 [2024-07-15 23:28:02.924210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.625 [2024-07-15 23:28:02.924237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:47.625 [2024-07-15 23:28:02.931115] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.625 [2024-07-15 23:28:02.931389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.625 [2024-07-15 23:28:02.931416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:47.625 [2024-07-15 23:28:02.938501] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.884 [2024-07-15 23:28:02.938794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.884 [2024-07-15 23:28:02.938823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:47.884 [2024-07-15 23:28:02.946378] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.884 [2024-07-15 23:28:02.946666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.884 [2024-07-15 23:28:02.946694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:47.884 [2024-07-15 23:28:02.954387] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.884 [2024-07-15 23:28:02.954663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.884 [2024-07-15 23:28:02.954689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
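The pattern repeated throughout this stretch of the log is a host-side "Data digest error" from tcp.c followed by a WRITE completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22): the CRC32C computed over a received PDU's data did not match the DDGST carried in that PDU, so the command is failed with a transient transport status. As a rough illustration only (this is not SPDK's code, and the exact seeding/finalization of the DDGST is defined by the NVMe/TCP specification), the sketch below computes a standard CRC32C (Castagnoli) over a payload buffer and shows how a single corrupted byte produces the kind of mismatch being reported above.

```c
/*
 * Illustrative sketch only (not SPDK code): a standard bit-reflected CRC32C
 * (Castagnoli polynomial 0x1EDC6F41, reversed form 0x82F63B78), the checksum
 * family used for the NVMe/TCP data digest that the log entries above report
 * as mismatching.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++) {
            /* Conditionally XOR the reversed Castagnoli polynomial. */
            crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    /* Example payload buffer (size chosen arbitrarily for the sketch). */
    uint8_t payload[32];
    memset(payload, 0xA5, sizeof(payload));

    uint32_t digest = crc32c(payload, sizeof(payload));
    printf("CRC32C digest:            0x%08x\n", digest);

    /*
     * Flip one bit after the digest was computed: the recomputed value no
     * longer matches, which is the condition the receiver reports as a
     * data digest error.
     */
    payload[0] ^= 0x01;
    printf("digest after corruption:  0x%08x (mismatch expected)\n",
           crc32c(payload, sizeof(payload)));
    return 0;
}
```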
00:24:47.884 [2024-07-15 23:28:02.961969] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.884 [2024-07-15 23:28:02.962307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.884 [2024-07-15 23:28:02.962333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:47.884 [2024-07-15 23:28:02.969804] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.884 [2024-07-15 23:28:02.970108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.884 [2024-07-15 23:28:02.970146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:47.884 [2024-07-15 23:28:02.977675] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.884 [2024-07-15 23:28:02.977978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.884 [2024-07-15 23:28:02.978006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:47.884 [2024-07-15 23:28:02.984890] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.884 [2024-07-15 23:28:02.985214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.884 [2024-07-15 23:28:02.985241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:47.884 [2024-07-15 23:28:02.992307] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.884 [2024-07-15 23:28:02.992584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.884 [2024-07-15 23:28:02.992611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:47.884 [2024-07-15 23:28:02.999552] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.885 [2024-07-15 23:28:02.999864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.885 [2024-07-15 23:28:02.999891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:47.885 [2024-07-15 23:28:03.007579] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.885 [2024-07-15 23:28:03.007883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.885 [2024-07-15 23:28:03.007910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:47.885 [2024-07-15 23:28:03.015374] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.885 [2024-07-15 23:28:03.015695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.885 [2024-07-15 23:28:03.015723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:47.885 [2024-07-15 23:28:03.022311] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.885 [2024-07-15 23:28:03.022613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.885 [2024-07-15 23:28:03.022644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:47.885 [2024-07-15 23:28:03.029100] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.885 [2024-07-15 23:28:03.029404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.885 [2024-07-15 23:28:03.029435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:47.885 [2024-07-15 23:28:03.035472] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.885 [2024-07-15 23:28:03.035799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.885 [2024-07-15 23:28:03.035827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:47.885 [2024-07-15 23:28:03.042122] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.885 [2024-07-15 23:28:03.042427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.885 [2024-07-15 23:28:03.042458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:47.885 [2024-07-15 23:28:03.048542] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.885 [2024-07-15 23:28:03.048861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.885 [2024-07-15 23:28:03.048889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:47.885 [2024-07-15 23:28:03.054941] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.885 [2024-07-15 23:28:03.055263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.885 [2024-07-15 23:28:03.055295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:47.885 [2024-07-15 23:28:03.061027] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.885 [2024-07-15 23:28:03.061336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.885 [2024-07-15 23:28:03.061368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:47.885 [2024-07-15 23:28:03.068170] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.885 [2024-07-15 23:28:03.068471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.885 [2024-07-15 23:28:03.068501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:47.885 [2024-07-15 23:28:03.075127] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.885 [2024-07-15 23:28:03.075428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.885 [2024-07-15 23:28:03.075459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:47.885 [2024-07-15 23:28:03.081426] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.885 [2024-07-15 23:28:03.081726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.885 [2024-07-15 23:28:03.081780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:47.885 [2024-07-15 23:28:03.087591] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.885 [2024-07-15 23:28:03.087906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.885 [2024-07-15 23:28:03.087937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:47.885 [2024-07-15 23:28:03.094968] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.885 [2024-07-15 23:28:03.095291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.885 [2024-07-15 23:28:03.095322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:47.885 [2024-07-15 23:28:03.101581] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.885 [2024-07-15 23:28:03.101890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.885 [2024-07-15 23:28:03.101918] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:47.885 [2024-07-15 23:28:03.107837] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.885 [2024-07-15 23:28:03.108140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.885 [2024-07-15 23:28:03.108171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:47.885 [2024-07-15 23:28:03.114310] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.885 [2024-07-15 23:28:03.114612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.885 [2024-07-15 23:28:03.114643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:47.885 [2024-07-15 23:28:03.120657] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.885 [2024-07-15 23:28:03.120949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.885 [2024-07-15 23:28:03.120976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:47.885 [2024-07-15 23:28:03.126964] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.885 [2024-07-15 23:28:03.127284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.885 [2024-07-15 23:28:03.127315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:47.885 [2024-07-15 23:28:03.133181] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.885 [2024-07-15 23:28:03.133482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.885 [2024-07-15 23:28:03.133513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:47.885 [2024-07-15 23:28:03.139651] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.885 [2024-07-15 23:28:03.139942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.885 [2024-07-15 23:28:03.139969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:47.885 [2024-07-15 23:28:03.146039] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.885 [2024-07-15 23:28:03.146358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.885 
[2024-07-15 23:28:03.146389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:47.885 [2024-07-15 23:28:03.153348] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.885 [2024-07-15 23:28:03.153654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.885 [2024-07-15 23:28:03.153684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:47.885 [2024-07-15 23:28:03.159664] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.885 [2024-07-15 23:28:03.159956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.885 [2024-07-15 23:28:03.159984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:47.885 [2024-07-15 23:28:03.165878] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.885 [2024-07-15 23:28:03.166274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.885 [2024-07-15 23:28:03.166305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:47.885 [2024-07-15 23:28:03.172526] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.885 [2024-07-15 23:28:03.172845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.885 [2024-07-15 23:28:03.172872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:47.885 [2024-07-15 23:28:03.179659] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.885 [2024-07-15 23:28:03.179963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.885 [2024-07-15 23:28:03.179989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:47.885 [2024-07-15 23:28:03.186989] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.885 [2024-07-15 23:28:03.187306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.885 [2024-07-15 23:28:03.187337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:47.886 [2024-07-15 23:28:03.193194] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:47.886 [2024-07-15 23:28:03.193495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.886 [2024-07-15 23:28:03.193526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:48.152 [2024-07-15 23:28:03.199732] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.153 [2024-07-15 23:28:03.200082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.153 [2024-07-15 23:28:03.200120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:48.153 [2024-07-15 23:28:03.206331] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.153 [2024-07-15 23:28:03.206645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.153 [2024-07-15 23:28:03.206677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:48.153 [2024-07-15 23:28:03.213989] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.153 [2024-07-15 23:28:03.214286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.153 [2024-07-15 23:28:03.214318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:48.153 [2024-07-15 23:28:03.220692] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.153 [2024-07-15 23:28:03.220976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.153 [2024-07-15 23:28:03.221003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:48.153 [2024-07-15 23:28:03.228433] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.153 [2024-07-15 23:28:03.228859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.153 [2024-07-15 23:28:03.228885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:48.153 [2024-07-15 23:28:03.236457] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.153 [2024-07-15 23:28:03.236811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.153 [2024-07-15 23:28:03.236838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:48.153 [2024-07-15 23:28:03.243719] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.153 [2024-07-15 23:28:03.244092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.153 [2024-07-15 23:28:03.244123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:48.153 [2024-07-15 23:28:03.250180] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.153 [2024-07-15 23:28:03.250487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.153 [2024-07-15 23:28:03.250519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:48.153 [2024-07-15 23:28:03.256889] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.153 [2024-07-15 23:28:03.257208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.153 [2024-07-15 23:28:03.257240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:48.153 [2024-07-15 23:28:03.263839] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.153 [2024-07-15 23:28:03.264169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.153 [2024-07-15 23:28:03.264200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:48.153 [2024-07-15 23:28:03.272027] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.153 [2024-07-15 23:28:03.272383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.153 [2024-07-15 23:28:03.272414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:48.153 [2024-07-15 23:28:03.279550] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.153 [2024-07-15 23:28:03.279871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.153 [2024-07-15 23:28:03.279900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:48.153 [2024-07-15 23:28:03.286837] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.153 [2024-07-15 23:28:03.287167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.153 [2024-07-15 23:28:03.287198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:48.153 [2024-07-15 23:28:03.293626] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.153 [2024-07-15 23:28:03.293934] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.153 [2024-07-15 23:28:03.293961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:48.153 [2024-07-15 23:28:03.300974] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.153 [2024-07-15 23:28:03.301276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.153 [2024-07-15 23:28:03.301307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:48.153 [2024-07-15 23:28:03.308031] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.153 [2024-07-15 23:28:03.308349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.153 [2024-07-15 23:28:03.308380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:48.153 [2024-07-15 23:28:03.315842] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.153 [2024-07-15 23:28:03.316158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.153 [2024-07-15 23:28:03.316191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:48.153 [2024-07-15 23:28:03.322934] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.153 [2024-07-15 23:28:03.323259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.153 [2024-07-15 23:28:03.323290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:48.153 [2024-07-15 23:28:03.330828] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.153 [2024-07-15 23:28:03.331180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.153 [2024-07-15 23:28:03.331212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:48.153 [2024-07-15 23:28:03.338412] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.153 [2024-07-15 23:28:03.338767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.153 [2024-07-15 23:28:03.338809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:48.153 [2024-07-15 23:28:03.345376] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.153 
[2024-07-15 23:28:03.345688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.154 [2024-07-15 23:28:03.345719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:48.154 [2024-07-15 23:28:03.352357] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.154 [2024-07-15 23:28:03.352669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.154 [2024-07-15 23:28:03.352699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:48.154 [2024-07-15 23:28:03.359108] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.154 [2024-07-15 23:28:03.359515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.154 [2024-07-15 23:28:03.359546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:48.154 [2024-07-15 23:28:03.365723] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.154 [2024-07-15 23:28:03.366069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.154 [2024-07-15 23:28:03.366100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:48.154 [2024-07-15 23:28:03.372911] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.154 [2024-07-15 23:28:03.373282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.154 [2024-07-15 23:28:03.373313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:48.154 [2024-07-15 23:28:03.380850] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.154 [2024-07-15 23:28:03.381130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.154 [2024-07-15 23:28:03.381156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:48.154 [2024-07-15 23:28:03.387598] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.154 [2024-07-15 23:28:03.387907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.154 [2024-07-15 23:28:03.387939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:48.154 [2024-07-15 23:28:03.394221] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.154 [2024-07-15 23:28:03.394581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.154 [2024-07-15 23:28:03.394612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:48.154 [2024-07-15 23:28:03.401223] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.154 [2024-07-15 23:28:03.401528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.154 [2024-07-15 23:28:03.401566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:48.154 [2024-07-15 23:28:03.407994] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.154 [2024-07-15 23:28:03.408293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.154 [2024-07-15 23:28:03.408324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:48.154 [2024-07-15 23:28:03.415488] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.154 [2024-07-15 23:28:03.415810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.154 [2024-07-15 23:28:03.415837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:48.154 [2024-07-15 23:28:03.423968] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.154 [2024-07-15 23:28:03.424293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.154 [2024-07-15 23:28:03.424323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:48.154 [2024-07-15 23:28:03.432963] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.154 [2024-07-15 23:28:03.433365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.154 [2024-07-15 23:28:03.433396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:48.154 [2024-07-15 23:28:03.441983] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.154 [2024-07-15 23:28:03.442394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.154 [2024-07-15 23:28:03.442424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:48.154 [2024-07-15 23:28:03.451165] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.154 [2024-07-15 23:28:03.451529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.154 [2024-07-15 23:28:03.451561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:48.154 [2024-07-15 23:28:03.460324] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.154 [2024-07-15 23:28:03.460752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.154 [2024-07-15 23:28:03.460797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:48.414 [2024-07-15 23:28:03.469587] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.414 [2024-07-15 23:28:03.469897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.414 [2024-07-15 23:28:03.469925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:48.414 [2024-07-15 23:28:03.478260] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.414 [2024-07-15 23:28:03.478637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.414 [2024-07-15 23:28:03.478668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:48.414 [2024-07-15 23:28:03.487885] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.414 [2024-07-15 23:28:03.488292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.414 [2024-07-15 23:28:03.488323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:48.414 [2024-07-15 23:28:03.497271] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.414 [2024-07-15 23:28:03.497671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.414 [2024-07-15 23:28:03.497702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:48.414 [2024-07-15 23:28:03.506477] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.414 [2024-07-15 23:28:03.506917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.414 [2024-07-15 23:28:03.506943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:24:48.414 [2024-07-15 23:28:03.515895] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.414 [2024-07-15 23:28:03.516286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.414 [2024-07-15 23:28:03.516317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:48.414 [2024-07-15 23:28:03.524825] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.414 [2024-07-15 23:28:03.525299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.414 [2024-07-15 23:28:03.525325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:48.414 [2024-07-15 23:28:03.534309] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.414 [2024-07-15 23:28:03.534621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.414 [2024-07-15 23:28:03.534652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:48.414 [2024-07-15 23:28:03.541441] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.414 [2024-07-15 23:28:03.541759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.414 [2024-07-15 23:28:03.541803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:48.414 [2024-07-15 23:28:03.548897] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.414 [2024-07-15 23:28:03.549237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.414 [2024-07-15 23:28:03.549267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:48.414 [2024-07-15 23:28:03.555958] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.414 [2024-07-15 23:28:03.556286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.414 [2024-07-15 23:28:03.556317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:48.414 [2024-07-15 23:28:03.564823] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.414 [2024-07-15 23:28:03.565162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.414 [2024-07-15 23:28:03.565193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:48.414 [2024-07-15 23:28:03.573185] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.414 [2024-07-15 23:28:03.573592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.414 [2024-07-15 23:28:03.573622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:48.414 [2024-07-15 23:28:03.581662] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.414 [2024-07-15 23:28:03.581969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.415 [2024-07-15 23:28:03.581996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:48.415 [2024-07-15 23:28:03.589372] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.415 [2024-07-15 23:28:03.589683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.415 [2024-07-15 23:28:03.589714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:48.415 [2024-07-15 23:28:03.596903] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.415 [2024-07-15 23:28:03.597322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.415 [2024-07-15 23:28:03.597353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:48.415 [2024-07-15 23:28:03.604820] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.415 [2024-07-15 23:28:03.605123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.415 [2024-07-15 23:28:03.605161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:48.415 [2024-07-15 23:28:03.611697] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.415 [2024-07-15 23:28:03.612028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.415 [2024-07-15 23:28:03.612073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:48.415 [2024-07-15 23:28:03.618756] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.415 [2024-07-15 23:28:03.619035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.415 [2024-07-15 23:28:03.619080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:48.415 [2024-07-15 23:28:03.625587] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.415 [2024-07-15 23:28:03.625904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.415 [2024-07-15 23:28:03.625931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:48.415 [2024-07-15 23:28:03.633317] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.415 [2024-07-15 23:28:03.633722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.415 [2024-07-15 23:28:03.633763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:48.415 [2024-07-15 23:28:03.640137] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.415 [2024-07-15 23:28:03.640442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.415 [2024-07-15 23:28:03.640473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:48.415 [2024-07-15 23:28:03.646371] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.415 [2024-07-15 23:28:03.646675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.415 [2024-07-15 23:28:03.646706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:48.415 [2024-07-15 23:28:03.653390] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.415 [2024-07-15 23:28:03.653690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.415 [2024-07-15 23:28:03.653720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:48.415 [2024-07-15 23:28:03.660926] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.415 [2024-07-15 23:28:03.661247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.415 [2024-07-15 23:28:03.661279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:48.415 [2024-07-15 23:28:03.667822] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.415 [2024-07-15 23:28:03.668142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.415 [2024-07-15 23:28:03.668172] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:48.415 [2024-07-15 23:28:03.674457] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.415 [2024-07-15 23:28:03.674784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.415 [2024-07-15 23:28:03.674811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:48.415 [2024-07-15 23:28:03.680909] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.415 [2024-07-15 23:28:03.681225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.415 [2024-07-15 23:28:03.681256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:48.415 [2024-07-15 23:28:03.687531] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.415 [2024-07-15 23:28:03.687847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.415 [2024-07-15 23:28:03.687873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:48.415 [2024-07-15 23:28:03.694046] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.415 [2024-07-15 23:28:03.694350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.415 [2024-07-15 23:28:03.694381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:48.415 [2024-07-15 23:28:03.701132] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.415 [2024-07-15 23:28:03.701440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.415 [2024-07-15 23:28:03.701471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:48.415 [2024-07-15 23:28:03.707832] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.415 [2024-07-15 23:28:03.708179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.415 [2024-07-15 23:28:03.708210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:48.415 [2024-07-15 23:28:03.714168] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.415 [2024-07-15 23:28:03.714472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.415 
[2024-07-15 23:28:03.714503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:48.415 [2024-07-15 23:28:03.720546] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.415 [2024-07-15 23:28:03.720865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.415 [2024-07-15 23:28:03.720892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:48.415 [2024-07-15 23:28:03.728423] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.672 [2024-07-15 23:28:03.728728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.672 [2024-07-15 23:28:03.728783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:48.672 [2024-07-15 23:28:03.735264] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144de60) with pdu=0x2000190fef90 00:24:48.672 [2024-07-15 23:28:03.735568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.672 [2024-07-15 23:28:03.735599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:48.672 00:24:48.672 Latency(us) 00:24:48.672 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:48.672 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:48.672 nvme0n1 : 2.00 4094.32 511.79 0.00 0.00 3898.75 2706.39 13592.65 00:24:48.672 =================================================================================================================== 00:24:48.672 Total : 4094.32 511.79 0.00 0.00 3898.75 2706.39 13592.65 00:24:48.672 0 00:24:48.672 23:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:48.672 23:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:48.673 23:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:48.673 23:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:48.673 | .driver_specific 00:24:48.673 | .nvme_error 00:24:48.673 | .status_code 00:24:48.673 | .command_transient_transport_error' 00:24:48.930 23:28:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 264 > 0 )) 00:24:48.930 23:28:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2439591 00:24:48.930 23:28:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2439591 ']' 00:24:48.930 23:28:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2439591 00:24:48.930 23:28:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:48.930 23:28:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' 
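The long run of entries above is the error-injection half of the digest test working as intended: each pair is a WRITE printed by nvme_io_qpair_print_command followed by its completion, which comes back as COMMAND TRANSIENT TRANSPORT ERROR (00/22) after data_crc32_calc_done flags a CRC32C data-digest mismatch on the TCP PDU. The trace that follows then reads bdevperf's iostat over its RPC socket and checks that the accumulated count of those completions is greater than zero. A standalone sketch of that same query, using the socket path and jq filter shown in the trace (it assumes the bdevperf app behind /var/tmp/bperf.sock is still running):

    # Count of transient transport errors seen by nvme0n1, same RPC and filter as the trace above
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

In this run the filter yields 264, which is why the (( 264 > 0 )) check in host/digest.sh passes.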
Linux = Linux ']' 00:24:48.930 23:28:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2439591 00:24:48.930 23:28:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:48.930 23:28:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:48.930 23:28:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2439591' 00:24:48.930 killing process with pid 2439591 00:24:48.930 23:28:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2439591 00:24:48.930 Received shutdown signal, test time was about 2.000000 seconds 00:24:48.930 00:24:48.930 Latency(us) 00:24:48.930 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:48.930 =================================================================================================================== 00:24:48.930 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:48.930 23:28:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2439591 00:24:49.188 23:28:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2438173 00:24:49.188 23:28:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2438173 ']' 00:24:49.188 23:28:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2438173 00:24:49.188 23:28:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:49.188 23:28:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:49.188 23:28:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2438173 00:24:49.188 23:28:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:49.188 23:28:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:49.188 23:28:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2438173' 00:24:49.188 killing process with pid 2438173 00:24:49.188 23:28:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2438173 00:24:49.188 23:28:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2438173 00:24:49.446 00:24:49.446 real 0m15.494s 00:24:49.446 user 0m30.314s 00:24:49.446 sys 0m4.662s 00:24:49.446 23:28:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:49.446 23:28:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:49.446 ************************************ 00:24:49.446 END TEST nvmf_digest_error 00:24:49.446 ************************************ 00:24:49.446 23:28:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:24:49.446 23:28:04 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:24:49.446 23:28:04 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:24:49.446 23:28:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:49.446 23:28:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:24:49.446 23:28:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:49.446 23:28:04 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:24:49.446 23:28:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:49.446 23:28:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:49.446 rmmod nvme_tcp 00:24:49.446 rmmod nvme_fabrics 00:24:49.446 rmmod nvme_keyring 00:24:49.446 23:28:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:49.446 23:28:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:24:49.446 23:28:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:24:49.446 23:28:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 2438173 ']' 00:24:49.446 23:28:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 2438173 00:24:49.446 23:28:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 2438173 ']' 00:24:49.446 23:28:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 2438173 00:24:49.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2438173) - No such process 00:24:49.446 23:28:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 2438173 is not found' 00:24:49.446 Process with pid 2438173 is not found 00:24:49.446 23:28:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:49.446 23:28:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:49.446 23:28:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:49.446 23:28:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:49.446 23:28:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:49.446 23:28:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:49.446 23:28:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:49.446 23:28:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:51.976 23:28:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:51.976 00:24:51.976 real 0m36.404s 00:24:51.976 user 1m3.190s 00:24:51.976 sys 0m10.876s 00:24:51.976 23:28:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:51.976 23:28:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:51.976 ************************************ 00:24:51.976 END TEST nvmf_digest 00:24:51.976 ************************************ 00:24:51.976 23:28:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:51.976 23:28:06 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:24:51.976 23:28:06 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:24:51.976 23:28:06 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:24:51.976 23:28:06 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:24:51.976 23:28:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:51.976 23:28:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:51.976 23:28:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:51.976 ************************************ 00:24:51.976 START TEST nvmf_bdevperf 00:24:51.976 ************************************ 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # 
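Teardown for the digest suite, traced just above, comes down to unloading the kernel NVMe/TCP stack and flushing the addresses off the test interface before the next suite (nvmf_bdevperf) begins. A rough manual equivalent, using the module and interface names that appear in this run:

    # modprobe -r also removes the dependent modules, matching the rmmod lines above
    modprobe -v -r nvme-tcp        # drops nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    ip -4 addr flush cvl_0_1       # clear the test addresses from the cvl_0_1 interface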
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:24:51.976 * Looking for test storage... 00:24:51.976 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
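The nvmf/common.sh variables sourced above define the identity the host side presents when the kernel initiator is used: $NVME_CONNECT expands to plain nvme connect, and $NVME_HOST to the --hostnqn/--hostid pair built from the NQN that nvme gen-hostnqn just produced. Combined, with the target address left as a placeholder since it is only derived later from NVMF_IP_PREFIX/NVMF_IP_LEAST_ADDR, the connect invocation would look roughly like:

    nvme connect -t tcp -a <target-ip> -s 4420 -n nqn.2016-06.io.spdk:testnqn \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
        --hostid=cd6acfbe-4794-e311-a299-001e67a97b02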
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:51.976 23:28:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:53.877 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:53.877 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:53.877 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:53.878 Found net devices under 0000:84:00.0: cvl_0_0 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:53.878 Found net devices under 0000:84:00.1: cvl_0_1 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:53.878 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:53.878 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:24:53.878 00:24:53.878 --- 10.0.0.2 ping statistics --- 00:24:53.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:53.878 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:53.878 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:53.878 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:24:53.878 00:24:53.878 --- 10.0.0.1 ping statistics --- 00:24:53.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:53.878 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2442024 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2442024 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2442024 ']' 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:53.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:53.878 23:28:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:53.878 [2024-07-15 23:28:09.036811] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:24:53.878 [2024-07-15 23:28:09.036887] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:53.878 EAL: No free 2048 kB hugepages reported on node 1 00:24:53.878 [2024-07-15 23:28:09.106500] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:54.136 [2024-07-15 23:28:09.224447] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
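The setup logged above is the harness splitting the two e810 ports between a private network namespace (target side) and the root namespace (initiator side), so NVMe/TCP traffic crosses a real link even on a single machine, and then launching nvmf_tgt inside that namespace. A minimal sketch of the same steps is below; interface names, addresses, core mask and log level are taken from this log, while the relative binary path is a placeholder for this workspace's build tree.

```bash
#!/usr/bin/env bash
# Sketch of the target/initiator split used in this run (names and addresses from the log).
set -e

TGT_NS=cvl_0_0_ns_spdk          # namespace that will own the target-side port
TGT_IF=cvl_0_0                  # e810 port 0000:84:00.0
INI_IF=cvl_0_1                  # e810 port 0000:84:00.1, stays in the root namespace

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$TGT_NS"
ip link set "$TGT_IF" netns "$TGT_NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"                          # initiator address
ip netns exec "$TGT_NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target address

ip link set "$INI_IF" up
ip netns exec "$TGT_NS" ip link set "$TGT_IF" up
ip netns exec "$TGT_NS" ip link set lo up

# Allow NVMe/TCP (port 4420) in on the initiator side, then sanity-check both directions.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$TGT_NS" ping -c 1 10.0.0.1

# The target application runs inside the namespace, as in the log.
ip netns exec "$TGT_NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
```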
00:24:54.136 [2024-07-15 23:28:09.224517] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:54.136 [2024-07-15 23:28:09.224533] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:54.136 [2024-07-15 23:28:09.224547] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:54.136 [2024-07-15 23:28:09.224560] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:54.136 [2024-07-15 23:28:09.224659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:54.136 [2024-07-15 23:28:09.224785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:54.136 [2024-07-15 23:28:09.224790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:54.701 23:28:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:54.701 23:28:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:24:54.701 23:28:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:54.701 23:28:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:54.701 23:28:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:54.701 23:28:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:54.701 23:28:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:54.701 23:28:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.701 23:28:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:54.701 [2024-07-15 23:28:09.998343] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:54.701 23:28:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.701 23:28:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:54.701 23:28:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.701 23:28:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:54.959 Malloc0 00:24:54.959 23:28:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.959 23:28:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:54.959 23:28:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.959 23:28:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:54.959 23:28:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.959 23:28:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:54.959 23:28:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.959 23:28:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:54.959 23:28:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.959 23:28:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:54.959 23:28:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:24:54.959 23:28:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:54.959 [2024-07-15 23:28:10.056345] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:54.959 23:28:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.959 23:28:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:24:54.959 23:28:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:24:54.959 23:28:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:24:54.959 23:28:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:24:54.959 23:28:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:54.959 23:28:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:54.959 { 00:24:54.959 "params": { 00:24:54.959 "name": "Nvme$subsystem", 00:24:54.959 "trtype": "$TEST_TRANSPORT", 00:24:54.959 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:54.959 "adrfam": "ipv4", 00:24:54.959 "trsvcid": "$NVMF_PORT", 00:24:54.959 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:54.959 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:54.959 "hdgst": ${hdgst:-false}, 00:24:54.959 "ddgst": ${ddgst:-false} 00:24:54.959 }, 00:24:54.959 "method": "bdev_nvme_attach_controller" 00:24:54.959 } 00:24:54.959 EOF 00:24:54.959 )") 00:24:54.959 23:28:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:24:54.959 23:28:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:24:54.959 23:28:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:24:54.959 23:28:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:54.959 "params": { 00:24:54.959 "name": "Nvme1", 00:24:54.959 "trtype": "tcp", 00:24:54.959 "traddr": "10.0.0.2", 00:24:54.959 "adrfam": "ipv4", 00:24:54.959 "trsvcid": "4420", 00:24:54.959 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:54.959 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:54.959 "hdgst": false, 00:24:54.959 "ddgst": false 00:24:54.959 }, 00:24:54.959 "method": "bdev_nvme_attach_controller" 00:24:54.959 }' 00:24:54.959 [2024-07-15 23:28:10.102485] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:24:54.959 [2024-07-15 23:28:10.102558] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2442179 ] 00:24:54.959 EAL: No free 2048 kB hugepages reported on node 1 00:24:54.959 [2024-07-15 23:28:10.163304] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.959 [2024-07-15 23:28:10.273107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:55.524 Running I/O for 1 seconds... 
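The bdevperf run above never touches the kernel initiator: the attach parameters printed by gen_nvmf_target_json are handed to bdevperf through a process-substitution descriptor (`--json /dev/fd/62`), so the verify workload runs against an SPDK bdev created by `bdev_nvme_attach_controller`. A stand-alone sketch of an equivalent invocation with the config written to a file follows; the `params`/`method` object is copied from the log, but the outer `subsystems`/`bdev` wrapper is my assumption about what the helper emits around it, and `/tmp/bdevperf.json` is a hypothetical path.

```bash
# Hypothetical stand-alone equivalent of the first bdevperf run (1 s verify, qd 128, 4 KiB IO).
cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF

./build/examples/bdevperf --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 1
```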
00:24:56.455 00:24:56.455 Latency(us) 00:24:56.455 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:56.455 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:56.455 Verification LBA range: start 0x0 length 0x4000 00:24:56.455 Nvme1n1 : 1.01 8828.46 34.49 0.00 0.00 14439.86 2997.67 17087.91 00:24:56.455 =================================================================================================================== 00:24:56.455 Total : 8828.46 34.49 0.00 0.00 14439.86 2997.67 17087.91 00:24:56.713 23:28:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2442323 00:24:56.713 23:28:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:24:56.713 23:28:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:24:56.713 23:28:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:24:56.713 23:28:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:24:56.713 23:28:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:24:56.713 23:28:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:56.713 23:28:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:56.713 { 00:24:56.713 "params": { 00:24:56.713 "name": "Nvme$subsystem", 00:24:56.713 "trtype": "$TEST_TRANSPORT", 00:24:56.713 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.713 "adrfam": "ipv4", 00:24:56.713 "trsvcid": "$NVMF_PORT", 00:24:56.713 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.713 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.713 "hdgst": ${hdgst:-false}, 00:24:56.713 "ddgst": ${ddgst:-false} 00:24:56.713 }, 00:24:56.713 "method": "bdev_nvme_attach_controller" 00:24:56.713 } 00:24:56.713 EOF 00:24:56.713 )") 00:24:56.713 23:28:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:24:56.713 23:28:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:24:56.713 23:28:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:24:56.713 23:28:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:56.713 "params": { 00:24:56.713 "name": "Nvme1", 00:24:56.713 "trtype": "tcp", 00:24:56.713 "traddr": "10.0.0.2", 00:24:56.713 "adrfam": "ipv4", 00:24:56.713 "trsvcid": "4420", 00:24:56.713 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:56.713 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:56.713 "hdgst": false, 00:24:56.713 "ddgst": false 00:24:56.713 }, 00:24:56.713 "method": "bdev_nvme_attach_controller" 00:24:56.713 }' 00:24:56.713 [2024-07-15 23:28:11.920801] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:24:56.713 [2024-07-15 23:28:11.920888] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2442323 ] 00:24:56.713 EAL: No free 2048 kB hugepages reported on node 1 00:24:56.713 [2024-07-15 23:28:11.982128] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.994 [2024-07-15 23:28:12.093401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:57.250 Running I/O for 15 seconds... 
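What follows is the failure-injection half of the test: a second bdevperf instance is started for a 15-second verify run, and while it is in flight the script hard-kills the target (host/bdevperf.sh line 33, pid 2442024 here). Every command still queued on the dead connection then completes with "ABORTED - SQ DELETION", which is what the long run of nvme_qpair *NOTICE* lines below records. A condensed sketch of that sequence, reusing the hypothetical config file from the previous sketch, with the timings and flags taken from the log:

```bash
# Condensed view of host/bdevperf.sh lines 29-35 as seen in this log.
# $tgt_pid (2442024 in this run) is the nvmf_tgt started earlier inside the namespace.

# 15 s verify run against the same attach config, kept in the background.
./build/examples/bdevperf --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 15 -f &
bdevperf_pid=$!

sleep 3              # let the run ramp up
kill -9 "$tgt_pid"   # hard-kill the target mid-run
sleep 3              # queued READs/WRITEs now complete as "ABORTED - SQ DELETION"
```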
00:24:59.798 23:28:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2442024 00:24:59.798 23:28:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:24:59.798 [2024-07-15 23:28:14.891419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:43088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.798 [2024-07-15 23:28:14.891475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.798 [2024-07-15 23:28:14.891510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:43096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.798 [2024-07-15 23:28:14.891539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.798 [2024-07-15 23:28:14.891559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:43104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.798 [2024-07-15 23:28:14.891576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.798 [2024-07-15 23:28:14.891593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:43112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.798 [2024-07-15 23:28:14.891608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.798 [2024-07-15 23:28:14.891626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:43120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.798 [2024-07-15 23:28:14.891642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.798 [2024-07-15 23:28:14.891659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:42872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.798 [2024-07-15 23:28:14.891675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.798 [2024-07-15 23:28:14.891692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:42880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.798 [2024-07-15 23:28:14.891707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.798 [2024-07-15 23:28:14.891724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:42888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.798 [2024-07-15 23:28:14.891747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.798 [2024-07-15 23:28:14.891782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:42896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.798 [2024-07-15 23:28:14.891797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.798 [2024-07-15 23:28:14.891813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:43128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.798 [2024-07-15 23:28:14.891830] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.798 [2024-07-15 23:28:14.891846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:43136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.798 [2024-07-15 23:28:14.891862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.798 [2024-07-15 23:28:14.891878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:43144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.798 [2024-07-15 23:28:14.891893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.798 [2024-07-15 23:28:14.891909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:43152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.798 [2024-07-15 23:28:14.891923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.798 [2024-07-15 23:28:14.891939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:43160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.798 [2024-07-15 23:28:14.891955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.798 [2024-07-15 23:28:14.891976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:43168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.798 [2024-07-15 23:28:14.891992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.798 [2024-07-15 23:28:14.892009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:43176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.798 [2024-07-15 23:28:14.892040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.798 [2024-07-15 23:28:14.892058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:43184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.798 [2024-07-15 23:28:14.892075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.798 [2024-07-15 23:28:14.892093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:43192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.798 [2024-07-15 23:28:14.892108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.798 [2024-07-15 23:28:14.892127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:43200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.798 [2024-07-15 23:28:14.892145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.798 [2024-07-15 23:28:14.892164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:43208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.798 [2024-07-15 23:28:14.892183] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.798 [2024-07-15 23:28:14.892201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:43216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.798 [2024-07-15 23:28:14.892220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.798 [2024-07-15 23:28:14.892241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:43224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.798 [2024-07-15 23:28:14.892258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.798 [2024-07-15 23:28:14.892278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:43232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.798 [2024-07-15 23:28:14.892297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.798 [2024-07-15 23:28:14.892315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:43240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.798 [2024-07-15 23:28:14.892331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.798 [2024-07-15 23:28:14.892348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:43248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.798 [2024-07-15 23:28:14.892363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.798 [2024-07-15 23:28:14.892380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:43256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.798 [2024-07-15 23:28:14.892396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.798 [2024-07-15 23:28:14.892412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:43264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.798 [2024-07-15 23:28:14.892431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.798 [2024-07-15 23:28:14.892451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:43272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.798 [2024-07-15 23:28:14.892467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.798 [2024-07-15 23:28:14.892484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:43280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.798 [2024-07-15 23:28:14.892499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.798 [2024-07-15 23:28:14.892516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:43288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.798 [2024-07-15 23:28:14.892531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.798 [2024-07-15 23:28:14.892548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:43296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.798 [2024-07-15 23:28:14.892563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.798 [2024-07-15 23:28:14.892579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:43304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.798 [2024-07-15 23:28:14.892594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.798 [2024-07-15 23:28:14.892611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:43312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.798 [2024-07-15 23:28:14.892626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.798 [2024-07-15 23:28:14.892643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.798 [2024-07-15 23:28:14.892658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.798 [2024-07-15 23:28:14.892676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:43328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.798 [2024-07-15 23:28:14.892691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.798 [2024-07-15 23:28:14.892708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:43336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.798 [2024-07-15 23:28:14.892723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.798 [2024-07-15 23:28:14.892747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:43344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.798 [2024-07-15 23:28:14.892765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.799 [2024-07-15 23:28:14.892802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:43352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.799 [2024-07-15 23:28:14.892817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.799 [2024-07-15 23:28:14.892832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:43360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.799 [2024-07-15 23:28:14.892846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.799 [2024-07-15 23:28:14.892861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:43368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.799 [2024-07-15 23:28:14.892878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.799 [2024-07-15 
23:28:14.892894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:43376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.799 [2024-07-15 23:28:14.892908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.799 [2024-07-15 23:28:14.892924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:43384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.799 [2024-07-15 23:28:14.892938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.799 [2024-07-15 23:28:14.892953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.799 [2024-07-15 23:28:14.892967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.799 [2024-07-15 23:28:14.892982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:43400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.799 [2024-07-15 23:28:14.892995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.799 [2024-07-15 23:28:14.893011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:43408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.799 [2024-07-15 23:28:14.893038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.799 [2024-07-15 23:28:14.893053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:43416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.799 [2024-07-15 23:28:14.893066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.799 [2024-07-15 23:28:14.893103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:43424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.799 [2024-07-15 23:28:14.893116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.799 [2024-07-15 23:28:14.893130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.799 [2024-07-15 23:28:14.893142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.799 [2024-07-15 23:28:14.893172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:43440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.799 [2024-07-15 23:28:14.893189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.799 [2024-07-15 23:28:14.893206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:43448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.799 [2024-07-15 23:28:14.893221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.799 [2024-07-15 23:28:14.893240] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:43456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.799 [2024-07-15 23:28:14.893256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.799 [2024-07-15 23:28:14.893274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:43464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.799 [2024-07-15 23:28:14.893289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.799 [2024-07-15 23:28:14.893309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:43472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.799 [2024-07-15 23:28:14.893325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.799 [2024-07-15 23:28:14.893342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:43480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.799 [2024-07-15 23:28:14.893357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.799 [2024-07-15 23:28:14.893374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.799 [2024-07-15 23:28:14.893389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.799 [2024-07-15 23:28:14.893406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:43496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.799 [2024-07-15 23:28:14.893421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.799 [2024-07-15 23:28:14.893438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:43504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.799 [2024-07-15 23:28:14.893453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.799 [2024-07-15 23:28:14.893470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:43512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.799 [2024-07-15 23:28:14.893484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.799 [2024-07-15 23:28:14.893501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:43520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.799 [2024-07-15 23:28:14.893516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.799 [2024-07-15 23:28:14.893533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:43528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.799 [2024-07-15 23:28:14.893548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.799 [2024-07-15 23:28:14.893564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:98 nsid:1 lba:43536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.799 [2024-07-15 23:28:14.893580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.799 [2024-07-15 23:28:14.893596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:43544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.799 [2024-07-15 23:28:14.893612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.799 [2024-07-15 23:28:14.893628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:43552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.799 [2024-07-15 23:28:14.893643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.799 [2024-07-15 23:28:14.893660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:43560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.799 [2024-07-15 23:28:14.893675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.799 [2024-07-15 23:28:14.893692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:43568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.799 [2024-07-15 23:28:14.893711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.799 [2024-07-15 23:28:14.893729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:43576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.799 [2024-07-15 23:28:14.893754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.799 [2024-07-15 23:28:14.893788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:42904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.799 [2024-07-15 23:28:14.893803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.799 [2024-07-15 23:28:14.893819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:42912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.799 [2024-07-15 23:28:14.893832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.799 [2024-07-15 23:28:14.893847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:42920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.799 [2024-07-15 23:28:14.893861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.799 [2024-07-15 23:28:14.893876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:42928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.799 [2024-07-15 23:28:14.893889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.799 [2024-07-15 23:28:14.893904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:42936 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.799 [2024-07-15 23:28:14.893918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.799 [2024-07-15 23:28:14.893932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:42944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.799 [2024-07-15 23:28:14.893946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.799 [2024-07-15 23:28:14.893961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:42952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.799 [2024-07-15 23:28:14.893975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.799 [2024-07-15 23:28:14.893989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.799 [2024-07-15 23:28:14.894003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.799 [2024-07-15 23:28:14.894018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:43592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.799 [2024-07-15 23:28:14.894045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.799 [2024-07-15 23:28:14.894060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:43600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.799 [2024-07-15 23:28:14.894073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.799 [2024-07-15 23:28:14.894087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:43608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.799 [2024-07-15 23:28:14.894116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.799 [2024-07-15 23:28:14.894138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:43616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.799 [2024-07-15 23:28:14.894153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.799 [2024-07-15 23:28:14.894170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:43624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.799 [2024-07-15 23:28:14.894185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.799 [2024-07-15 23:28:14.894201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:43632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.800 [2024-07-15 23:28:14.894216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.800 [2024-07-15 23:28:14.894234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:43640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.800 
[2024-07-15 23:28:14.894249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.800 [2024-07-15 23:28:14.894266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:43648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.800 [2024-07-15 23:28:14.894281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.800 [2024-07-15 23:28:14.894298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:43656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.800 [2024-07-15 23:28:14.894313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.800 [2024-07-15 23:28:14.894330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:43664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.800 [2024-07-15 23:28:14.894345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.800 [2024-07-15 23:28:14.894362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:43672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.800 [2024-07-15 23:28:14.894377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.800 [2024-07-15 23:28:14.894394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:43680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.800 [2024-07-15 23:28:14.894409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.800 [2024-07-15 23:28:14.894426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:43688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.800 [2024-07-15 23:28:14.894441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.800 [2024-07-15 23:28:14.894458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:43696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.800 [2024-07-15 23:28:14.894473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.800 [2024-07-15 23:28:14.894490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:43704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.800 [2024-07-15 23:28:14.894505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.800 [2024-07-15 23:28:14.894521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:43712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.800 [2024-07-15 23:28:14.894537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.800 [2024-07-15 23:28:14.894557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:43720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.800 [2024-07-15 23:28:14.894573] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.800 [2024-07-15 23:28:14.894590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:43728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.800 [2024-07-15 23:28:14.894605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.800 [2024-07-15 23:28:14.894621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:43736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.800 [2024-07-15 23:28:14.894637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.800 [2024-07-15 23:28:14.894654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:43744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.800 [2024-07-15 23:28:14.894669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.800 [2024-07-15 23:28:14.894686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:43752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.800 [2024-07-15 23:28:14.894701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.800 [2024-07-15 23:28:14.894718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:43760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.800 [2024-07-15 23:28:14.894733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.800 [2024-07-15 23:28:14.894758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:43768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.800 [2024-07-15 23:28:14.894795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.800 [2024-07-15 23:28:14.894811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:43776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.800 [2024-07-15 23:28:14.894824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.800 [2024-07-15 23:28:14.894840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:43784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.800 [2024-07-15 23:28:14.894864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.800 [2024-07-15 23:28:14.894879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:43792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.800 [2024-07-15 23:28:14.894892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.800 [2024-07-15 23:28:14.894907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:43800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.800 [2024-07-15 23:28:14.894921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.800 [2024-07-15 23:28:14.894936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:43808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.800 [2024-07-15 23:28:14.894950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.800 [2024-07-15 23:28:14.894965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:43816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.800 [2024-07-15 23:28:14.894985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.800 [2024-07-15 23:28:14.895001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:43824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.800 [2024-07-15 23:28:14.895030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.800 [2024-07-15 23:28:14.895048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:43832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.800 [2024-07-15 23:28:14.895063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.800 [2024-07-15 23:28:14.895079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:43840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.800 [2024-07-15 23:28:14.895094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.800 [2024-07-15 23:28:14.895111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:43848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.800 [2024-07-15 23:28:14.895126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.800 [2024-07-15 23:28:14.895143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:43856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.800 [2024-07-15 23:28:14.895158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.800 [2024-07-15 23:28:14.895175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:43864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.800 [2024-07-15 23:28:14.895191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.800 [2024-07-15 23:28:14.895208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:43872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.800 [2024-07-15 23:28:14.895224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.800 [2024-07-15 23:28:14.895240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:43880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.800 [2024-07-15 23:28:14.895255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:59.800 [2024-07-15 23:28:14.895272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:43888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.800 [2024-07-15 23:28:14.895287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.800 [2024-07-15 23:28:14.895304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:42960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.800 [2024-07-15 23:28:14.895319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.800 [2024-07-15 23:28:14.895335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:42968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.800 [2024-07-15 23:28:14.895350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.800 [2024-07-15 23:28:14.895368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:42976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.800 [2024-07-15 23:28:14.895384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.800 [2024-07-15 23:28:14.895405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:42984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.800 [2024-07-15 23:28:14.895421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.800 [2024-07-15 23:28:14.895438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:42992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.800 [2024-07-15 23:28:14.895453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.800 [2024-07-15 23:28:14.895470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:43000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.800 [2024-07-15 23:28:14.895485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.800 [2024-07-15 23:28:14.895502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:43008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.800 [2024-07-15 23:28:14.895517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.800 [2024-07-15 23:28:14.895534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:43016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.800 [2024-07-15 23:28:14.895549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.800 [2024-07-15 23:28:14.895565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:43024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.800 [2024-07-15 23:28:14.895580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.800 [2024-07-15 
23:28:14.895597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:43032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.800 [2024-07-15 23:28:14.895612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.800 [2024-07-15 23:28:14.895629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:43040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.801 [2024-07-15 23:28:14.895644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.801 [2024-07-15 23:28:14.895661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:43048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.801 [2024-07-15 23:28:14.895675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.801 [2024-07-15 23:28:14.895692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:43056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.801 [2024-07-15 23:28:14.895707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.801 [2024-07-15 23:28:14.895723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:43064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.801 [2024-07-15 23:28:14.895956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.801 [2024-07-15 23:28:14.895981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:43072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.801 [2024-07-15 23:28:14.895995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.801 [2024-07-15 23:28:14.896010] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ead70 is same with the state(5) to be set 00:24:59.801 [2024-07-15 23:28:14.896049] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:59.801 [2024-07-15 23:28:14.896063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:59.801 [2024-07-15 23:28:14.896077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43080 len:8 PRP1 0x0 PRP2 0x0 00:24:59.801 [2024-07-15 23:28:14.896091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.801 [2024-07-15 23:28:14.896165] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22ead70 was disconnected and freed. reset controller. 
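Editor's note: the dump above shows queued READ/WRITE commands being completed with status "(00/08)", i.e. status code type 0x0 (generic) and status code 0x08, which the NVMe base specification defines as Command Aborted due to SQ Deletion and which SPDK prints as "ABORTED - SQ DELETION". As a minimal, standalone sketch (deliberately not SPDK's own structures), the following C snippet decodes such a 16-bit completion status word; the example value 0x0010 is assumed purely for illustration.

    /* Decode the 16-bit status word from completion DW3[31:16]:
     * bit 0 = phase, bits 8:1 = status code (SC), bits 11:9 = status
     * code type (SCT), bit 14 = more, bit 15 = do-not-retry. */
    #include <stdio.h>
    #include <stdint.h>

    static const char *sct_name(unsigned sct)
    {
        switch (sct) {
        case 0x0: return "GENERIC";
        case 0x1: return "COMMAND SPECIFIC";
        case 0x2: return "MEDIA AND DATA INTEGRITY";
        case 0x3: return "PATH RELATED";
        default:  return "VENDOR SPECIFIC / RESERVED";
        }
    }

    int main(void)
    {
        uint16_t status = 0x0010;            /* illustrative: p=0, sct=0, sc=0x08 */

        unsigned p   = status & 0x1;
        unsigned sc  = (status >> 1) & 0xff;  /* status code      */
        unsigned sct = (status >> 9) & 0x7;   /* status code type */
        unsigned m   = (status >> 14) & 0x1;
        unsigned dnr = (status >> 15) & 0x1;

        printf("sct:%s (0x%x) sc:0x%02x p:%u m:%u dnr:%u -> %s\n",
               sct_name(sct), sct, sc, p, m, dnr,
               (sct == 0x0 && sc == 0x08) ? "ABORTED - SQ DELETION" : "other");
        return 0;
    }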
00:24:59.801 [2024-07-15 23:28:14.896248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.801 [2024-07-15 23:28:14.896271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.801 [2024-07-15 23:28:14.896289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.801 [2024-07-15 23:28:14.896303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.801 [2024-07-15 23:28:14.896318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.801 [2024-07-15 23:28:14.896333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.801 [2024-07-15 23:28:14.896348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.801 [2024-07-15 23:28:14.896362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.801 [2024-07-15 23:28:14.896376] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:24:59.801 [2024-07-15 23:28:14.899962] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.801 [2024-07-15 23:28:14.900001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:24:59.801 [2024-07-15 23:28:14.900816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.801 [2024-07-15 23:28:14.900846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:24:59.801 [2024-07-15 23:28:14.900863] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:24:59.801 [2024-07-15 23:28:14.901110] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:24:59.801 [2024-07-15 23:28:14.901357] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.801 [2024-07-15 23:28:14.901380] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.801 [2024-07-15 23:28:14.901400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.801 [2024-07-15 23:28:14.904987] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
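Editor's note: each reset attempt above fails at the socket layer first: connect() to the target at 10.0.0.2:4420 returns errno 111, which on Linux is ECONNREFUSED (no listener on that port while the controller is down), and the half-set-up qpair is then torn down with "Bad file descriptor". The standalone POSIX sketch below reproduces that failure mode; the address and port simply mirror the log, and the program is illustrative, not part of the test.

    /* Plain TCP connect() to a port with no listener fails with
     * errno 111 (ECONNREFUSED) on Linux, matching posix_sock_create's
     * "connect() failed, errno = 111" message above. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                  /* NVMe/TCP default port */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener on the target side this prints errno = 111. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }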
00:24:59.801 [2024-07-15 23:28:14.914012] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.801 [2024-07-15 23:28:14.914530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.801 [2024-07-15 23:28:14.914566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:24:59.801 [2024-07-15 23:28:14.914584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:24:59.801 [2024-07-15 23:28:14.914844] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:24:59.801 [2024-07-15 23:28:14.915079] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.801 [2024-07-15 23:28:14.915117] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.801 [2024-07-15 23:28:14.915133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.801 [2024-07-15 23:28:14.918696] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.801 [2024-07-15 23:28:14.927948] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.801 [2024-07-15 23:28:14.928496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.801 [2024-07-15 23:28:14.928544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:24:59.801 [2024-07-15 23:28:14.928562] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:24:59.801 [2024-07-15 23:28:14.928824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:24:59.801 [2024-07-15 23:28:14.929056] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.801 [2024-07-15 23:28:14.929080] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.801 [2024-07-15 23:28:14.929095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.801 [2024-07-15 23:28:14.932672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.801 [2024-07-15 23:28:14.941965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.801 [2024-07-15 23:28:14.942463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.801 [2024-07-15 23:28:14.942513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:24:59.801 [2024-07-15 23:28:14.942531] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:24:59.801 [2024-07-15 23:28:14.942780] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:24:59.801 [2024-07-15 23:28:14.943023] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.801 [2024-07-15 23:28:14.943047] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.801 [2024-07-15 23:28:14.943062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.801 [2024-07-15 23:28:14.946635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.801 [2024-07-15 23:28:14.955925] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.801 [2024-07-15 23:28:14.956424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.801 [2024-07-15 23:28:14.956473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:24:59.801 [2024-07-15 23:28:14.956490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:24:59.801 [2024-07-15 23:28:14.956728] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:24:59.801 [2024-07-15 23:28:14.956981] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.801 [2024-07-15 23:28:14.957005] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.801 [2024-07-15 23:28:14.957020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.801 [2024-07-15 23:28:14.960599] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.801 [2024-07-15 23:28:14.969889] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.801 [2024-07-15 23:28:14.970404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.801 [2024-07-15 23:28:14.970435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:24:59.801 [2024-07-15 23:28:14.970452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:24:59.801 [2024-07-15 23:28:14.970690] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:24:59.801 [2024-07-15 23:28:14.970943] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.801 [2024-07-15 23:28:14.970968] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.801 [2024-07-15 23:28:14.970983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.801 [2024-07-15 23:28:14.974556] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.801 [2024-07-15 23:28:14.983837] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.801 [2024-07-15 23:28:14.984327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.801 [2024-07-15 23:28:14.984357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:24:59.801 [2024-07-15 23:28:14.984375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:24:59.801 [2024-07-15 23:28:14.984613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:24:59.801 [2024-07-15 23:28:14.984869] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.801 [2024-07-15 23:28:14.984893] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.801 [2024-07-15 23:28:14.984908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.801 [2024-07-15 23:28:14.988482] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.801 [2024-07-15 23:28:14.997770] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.801 [2024-07-15 23:28:14.998241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.801 [2024-07-15 23:28:14.998271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:24:59.801 [2024-07-15 23:28:14.998289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:24:59.801 [2024-07-15 23:28:14.998527] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:24:59.801 [2024-07-15 23:28:14.998782] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.801 [2024-07-15 23:28:14.998807] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.801 [2024-07-15 23:28:14.998822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.802 [2024-07-15 23:28:15.002395] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.802 [2024-07-15 23:28:15.011688] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.802 [2024-07-15 23:28:15.012216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.802 [2024-07-15 23:28:15.012248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:24:59.802 [2024-07-15 23:28:15.012271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:24:59.802 [2024-07-15 23:28:15.012511] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:24:59.802 [2024-07-15 23:28:15.012766] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.802 [2024-07-15 23:28:15.012790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.802 [2024-07-15 23:28:15.012806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.802 [2024-07-15 23:28:15.016375] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.802 [2024-07-15 23:28:15.025651] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.802 [2024-07-15 23:28:15.026163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.802 [2024-07-15 23:28:15.026194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:24:59.802 [2024-07-15 23:28:15.026213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:24:59.802 [2024-07-15 23:28:15.026451] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:24:59.802 [2024-07-15 23:28:15.026694] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.802 [2024-07-15 23:28:15.026718] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.802 [2024-07-15 23:28:15.026733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.802 [2024-07-15 23:28:15.030319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.802 [2024-07-15 23:28:15.039606] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.802 [2024-07-15 23:28:15.040097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.802 [2024-07-15 23:28:15.040128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:24:59.802 [2024-07-15 23:28:15.040145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:24:59.802 [2024-07-15 23:28:15.040384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:24:59.802 [2024-07-15 23:28:15.040627] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.802 [2024-07-15 23:28:15.040651] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.802 [2024-07-15 23:28:15.040666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.802 [2024-07-15 23:28:15.044245] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.802 [2024-07-15 23:28:15.053521] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.802 [2024-07-15 23:28:15.054042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.802 [2024-07-15 23:28:15.054091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:24:59.802 [2024-07-15 23:28:15.054109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:24:59.802 [2024-07-15 23:28:15.054349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:24:59.802 [2024-07-15 23:28:15.054592] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.802 [2024-07-15 23:28:15.054621] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.802 [2024-07-15 23:28:15.054637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.802 [2024-07-15 23:28:15.058216] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.802 [2024-07-15 23:28:15.067508] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.802 [2024-07-15 23:28:15.068020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.802 [2024-07-15 23:28:15.068069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:24:59.802 [2024-07-15 23:28:15.068087] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:24:59.802 [2024-07-15 23:28:15.068325] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:24:59.802 [2024-07-15 23:28:15.068568] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.802 [2024-07-15 23:28:15.068592] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.802 [2024-07-15 23:28:15.068607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.802 [2024-07-15 23:28:15.072190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.802 [2024-07-15 23:28:15.081462] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.802 [2024-07-15 23:28:15.081999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.802 [2024-07-15 23:28:15.082052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:24:59.802 [2024-07-15 23:28:15.082069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:24:59.802 [2024-07-15 23:28:15.082308] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:24:59.802 [2024-07-15 23:28:15.082551] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.802 [2024-07-15 23:28:15.082575] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.802 [2024-07-15 23:28:15.082590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.802 [2024-07-15 23:28:15.086171] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.802 [2024-07-15 23:28:15.095448] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.802 [2024-07-15 23:28:15.095950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.802 [2024-07-15 23:28:15.095998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:24:59.802 [2024-07-15 23:28:15.096015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:24:59.802 [2024-07-15 23:28:15.096254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:24:59.802 [2024-07-15 23:28:15.096496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.802 [2024-07-15 23:28:15.096520] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.802 [2024-07-15 23:28:15.096536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.802 [2024-07-15 23:28:15.100116] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.802 [2024-07-15 23:28:15.109408] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.802 [2024-07-15 23:28:15.109922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.802 [2024-07-15 23:28:15.109954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:24:59.802 [2024-07-15 23:28:15.109971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.061 [2024-07-15 23:28:15.110210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.061 [2024-07-15 23:28:15.110453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.061 [2024-07-15 23:28:15.110477] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.061 [2024-07-15 23:28:15.110492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.061 [2024-07-15 23:28:15.114077] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.061 [2024-07-15 23:28:15.123355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.061 [2024-07-15 23:28:15.123851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.061 [2024-07-15 23:28:15.123882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.061 [2024-07-15 23:28:15.123900] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.061 [2024-07-15 23:28:15.124138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.061 [2024-07-15 23:28:15.124381] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.061 [2024-07-15 23:28:15.124405] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.061 [2024-07-15 23:28:15.124420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.061 [2024-07-15 23:28:15.128003] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.061 [2024-07-15 23:28:15.137287] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.061 [2024-07-15 23:28:15.137734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.061 [2024-07-15 23:28:15.137772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.061 [2024-07-15 23:28:15.137789] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.061 [2024-07-15 23:28:15.138027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.061 [2024-07-15 23:28:15.138271] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.061 [2024-07-15 23:28:15.138295] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.061 [2024-07-15 23:28:15.138310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.061 [2024-07-15 23:28:15.141892] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.061 [2024-07-15 23:28:15.151173] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.061 [2024-07-15 23:28:15.151617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.061 [2024-07-15 23:28:15.151648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.061 [2024-07-15 23:28:15.151665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.061 [2024-07-15 23:28:15.151918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.061 [2024-07-15 23:28:15.152161] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.061 [2024-07-15 23:28:15.152185] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.061 [2024-07-15 23:28:15.152200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.061 [2024-07-15 23:28:15.155777] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.061 [2024-07-15 23:28:15.165055] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.061 [2024-07-15 23:28:15.165547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.061 [2024-07-15 23:28:15.165578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.061 [2024-07-15 23:28:15.165595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.061 [2024-07-15 23:28:15.165844] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.061 [2024-07-15 23:28:15.166087] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.061 [2024-07-15 23:28:15.166111] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.061 [2024-07-15 23:28:15.166126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.061 [2024-07-15 23:28:15.169694] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.061 [2024-07-15 23:28:15.179002] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.061 [2024-07-15 23:28:15.179525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.061 [2024-07-15 23:28:15.179556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.061 [2024-07-15 23:28:15.179574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.061 [2024-07-15 23:28:15.179825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.061 [2024-07-15 23:28:15.180069] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.061 [2024-07-15 23:28:15.180093] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.061 [2024-07-15 23:28:15.180108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.061 [2024-07-15 23:28:15.183677] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.061 [2024-07-15 23:28:15.192963] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.061 [2024-07-15 23:28:15.193480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.061 [2024-07-15 23:28:15.193511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.061 [2024-07-15 23:28:15.193528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.061 [2024-07-15 23:28:15.193778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.061 [2024-07-15 23:28:15.194021] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.061 [2024-07-15 23:28:15.194045] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.061 [2024-07-15 23:28:15.194066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.062 [2024-07-15 23:28:15.197637] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.062 [2024-07-15 23:28:15.206927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.062 [2024-07-15 23:28:15.207428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.062 [2024-07-15 23:28:15.207459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.062 [2024-07-15 23:28:15.207476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.062 [2024-07-15 23:28:15.207714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.062 [2024-07-15 23:28:15.207972] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.062 [2024-07-15 23:28:15.207997] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.062 [2024-07-15 23:28:15.208012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.062 [2024-07-15 23:28:15.211581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.062 [2024-07-15 23:28:15.220860] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.062 [2024-07-15 23:28:15.221313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.062 [2024-07-15 23:28:15.221365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.062 [2024-07-15 23:28:15.221382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.062 [2024-07-15 23:28:15.221620] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.062 [2024-07-15 23:28:15.221876] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.062 [2024-07-15 23:28:15.221901] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.062 [2024-07-15 23:28:15.221916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.062 [2024-07-15 23:28:15.225487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.062 [2024-07-15 23:28:15.234774] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.062 [2024-07-15 23:28:15.235253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.062 [2024-07-15 23:28:15.235284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.062 [2024-07-15 23:28:15.235301] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.062 [2024-07-15 23:28:15.235539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.062 [2024-07-15 23:28:15.235795] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.062 [2024-07-15 23:28:15.235820] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.062 [2024-07-15 23:28:15.235835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.062 [2024-07-15 23:28:15.239406] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.062 [2024-07-15 23:28:15.248685] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.062 [2024-07-15 23:28:15.249216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.062 [2024-07-15 23:28:15.249247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.062 [2024-07-15 23:28:15.249265] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.062 [2024-07-15 23:28:15.249503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.062 [2024-07-15 23:28:15.249757] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.062 [2024-07-15 23:28:15.249782] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.062 [2024-07-15 23:28:15.249796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.062 [2024-07-15 23:28:15.253365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.062 [2024-07-15 23:28:15.262642] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.062 [2024-07-15 23:28:15.263190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.062 [2024-07-15 23:28:15.263221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.062 [2024-07-15 23:28:15.263239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.062 [2024-07-15 23:28:15.263476] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.062 [2024-07-15 23:28:15.263719] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.062 [2024-07-15 23:28:15.263753] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.062 [2024-07-15 23:28:15.263771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.062 [2024-07-15 23:28:15.267343] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.062 [2024-07-15 23:28:15.276624] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.062 [2024-07-15 23:28:15.277144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.062 [2024-07-15 23:28:15.277175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.062 [2024-07-15 23:28:15.277192] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.062 [2024-07-15 23:28:15.277430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.062 [2024-07-15 23:28:15.277673] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.062 [2024-07-15 23:28:15.277696] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.062 [2024-07-15 23:28:15.277712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.062 [2024-07-15 23:28:15.281293] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.062 [2024-07-15 23:28:15.290576] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.062 [2024-07-15 23:28:15.291085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.062 [2024-07-15 23:28:15.291137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.062 [2024-07-15 23:28:15.291154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.062 [2024-07-15 23:28:15.291398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.062 [2024-07-15 23:28:15.291641] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.062 [2024-07-15 23:28:15.291665] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.062 [2024-07-15 23:28:15.291680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.062 [2024-07-15 23:28:15.295258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.062 [2024-07-15 23:28:15.304535] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.062 [2024-07-15 23:28:15.305032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.062 [2024-07-15 23:28:15.305083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.062 [2024-07-15 23:28:15.305101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.062 [2024-07-15 23:28:15.305339] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.062 [2024-07-15 23:28:15.305581] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.062 [2024-07-15 23:28:15.305605] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.062 [2024-07-15 23:28:15.305620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.062 [2024-07-15 23:28:15.309200] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.062 [2024-07-15 23:28:15.318478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.062 [2024-07-15 23:28:15.319011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.062 [2024-07-15 23:28:15.319061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.062 [2024-07-15 23:28:15.319078] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.062 [2024-07-15 23:28:15.319316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.062 [2024-07-15 23:28:15.319559] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.062 [2024-07-15 23:28:15.319582] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.062 [2024-07-15 23:28:15.319597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.062 [2024-07-15 23:28:15.323178] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.062 [2024-07-15 23:28:15.332460] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.062 [2024-07-15 23:28:15.332983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.062 [2024-07-15 23:28:15.333014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.062 [2024-07-15 23:28:15.333031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.062 [2024-07-15 23:28:15.333269] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.062 [2024-07-15 23:28:15.333512] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.062 [2024-07-15 23:28:15.333536] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.062 [2024-07-15 23:28:15.333556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.062 [2024-07-15 23:28:15.337140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.062 [2024-07-15 23:28:15.346441] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.062 [2024-07-15 23:28:15.346858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.062 [2024-07-15 23:28:15.346889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.062 [2024-07-15 23:28:15.346907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.062 [2024-07-15 23:28:15.347145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.062 [2024-07-15 23:28:15.347389] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.062 [2024-07-15 23:28:15.347413] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.063 [2024-07-15 23:28:15.347428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.063 [2024-07-15 23:28:15.351008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.063 [2024-07-15 23:28:15.360290] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.063 [2024-07-15 23:28:15.360713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.063 [2024-07-15 23:28:15.360752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.063 [2024-07-15 23:28:15.360772] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.063 [2024-07-15 23:28:15.361011] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.063 [2024-07-15 23:28:15.361254] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.063 [2024-07-15 23:28:15.361277] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.063 [2024-07-15 23:28:15.361293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.063 [2024-07-15 23:28:15.364868] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.063 [2024-07-15 23:28:15.374141] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.063 [2024-07-15 23:28:15.374572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.063 [2024-07-15 23:28:15.374626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.063 [2024-07-15 23:28:15.374643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.063 [2024-07-15 23:28:15.374891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.063 [2024-07-15 23:28:15.375135] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.063 [2024-07-15 23:28:15.375159] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.063 [2024-07-15 23:28:15.375174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.322 [2024-07-15 23:28:15.378757] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.322 [2024-07-15 23:28:15.388055] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.322 [2024-07-15 23:28:15.388494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.322 [2024-07-15 23:28:15.388554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.322 [2024-07-15 23:28:15.388573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.322 [2024-07-15 23:28:15.388833] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.322 [2024-07-15 23:28:15.389062] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.322 [2024-07-15 23:28:15.389097] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.322 [2024-07-15 23:28:15.389113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.322 [2024-07-15 23:28:15.392693] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.322 [2024-07-15 23:28:15.402084] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.322 [2024-07-15 23:28:15.402526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.322 [2024-07-15 23:28:15.402557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.322 [2024-07-15 23:28:15.402574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.322 [2024-07-15 23:28:15.402823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.322 [2024-07-15 23:28:15.403067] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.322 [2024-07-15 23:28:15.403091] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.322 [2024-07-15 23:28:15.403106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.322 [2024-07-15 23:28:15.406679] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.322 [2024-07-15 23:28:15.415996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.322 [2024-07-15 23:28:15.416458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.322 [2024-07-15 23:28:15.416484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.322 [2024-07-15 23:28:15.416498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.322 [2024-07-15 23:28:15.416762] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.322 [2024-07-15 23:28:15.417008] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.322 [2024-07-15 23:28:15.417040] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.322 [2024-07-15 23:28:15.417055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.322 [2024-07-15 23:28:15.420629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.322 [2024-07-15 23:28:15.430053] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.322 [2024-07-15 23:28:15.430461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.322 [2024-07-15 23:28:15.430493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.322 [2024-07-15 23:28:15.430510] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.322 [2024-07-15 23:28:15.430771] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.322 [2024-07-15 23:28:15.431036] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.322 [2024-07-15 23:28:15.431061] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.323 [2024-07-15 23:28:15.431076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.323 [2024-07-15 23:28:15.434661] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.323 [2024-07-15 23:28:15.443685] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.323 [2024-07-15 23:28:15.444067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.323 [2024-07-15 23:28:15.444096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.323 [2024-07-15 23:28:15.444113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.323 [2024-07-15 23:28:15.444335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.323 [2024-07-15 23:28:15.444561] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.323 [2024-07-15 23:28:15.444584] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.323 [2024-07-15 23:28:15.444598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.323 [2024-07-15 23:28:15.447856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.323 [2024-07-15 23:28:15.456963] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.323 [2024-07-15 23:28:15.457395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.323 [2024-07-15 23:28:15.457421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.323 [2024-07-15 23:28:15.457436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.323 [2024-07-15 23:28:15.457630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.323 [2024-07-15 23:28:15.457863] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.323 [2024-07-15 23:28:15.457884] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.323 [2024-07-15 23:28:15.457898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.323 [2024-07-15 23:28:15.460965] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.323 [2024-07-15 23:28:15.470373] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.323 [2024-07-15 23:28:15.470752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.323 [2024-07-15 23:28:15.470804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.323 [2024-07-15 23:28:15.470820] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.323 [2024-07-15 23:28:15.471056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.323 [2024-07-15 23:28:15.471271] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.323 [2024-07-15 23:28:15.471291] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.323 [2024-07-15 23:28:15.471303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.323 [2024-07-15 23:28:15.474383] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.323 [2024-07-15 23:28:15.483751] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.323 [2024-07-15 23:28:15.484191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.323 [2024-07-15 23:28:15.484231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.323 [2024-07-15 23:28:15.484246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.323 [2024-07-15 23:28:15.484455] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.323 [2024-07-15 23:28:15.484661] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.323 [2024-07-15 23:28:15.484680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.323 [2024-07-15 23:28:15.484692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.323 [2024-07-15 23:28:15.487843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.323 [2024-07-15 23:28:15.497193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.323 [2024-07-15 23:28:15.497567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.323 [2024-07-15 23:28:15.497607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.323 [2024-07-15 23:28:15.497621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.323 [2024-07-15 23:28:15.497861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.323 [2024-07-15 23:28:15.498082] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.323 [2024-07-15 23:28:15.498110] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.323 [2024-07-15 23:28:15.498122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.323 [2024-07-15 23:28:15.501121] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.323 [2024-07-15 23:28:15.510561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.323 [2024-07-15 23:28:15.510924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.323 [2024-07-15 23:28:15.510965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.323 [2024-07-15 23:28:15.510980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.323 [2024-07-15 23:28:15.511174] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.323 [2024-07-15 23:28:15.511374] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.323 [2024-07-15 23:28:15.511393] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.323 [2024-07-15 23:28:15.511405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.323 [2024-07-15 23:28:15.514395] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.323 [2024-07-15 23:28:15.523946] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.323 [2024-07-15 23:28:15.524404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.323 [2024-07-15 23:28:15.524444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.323 [2024-07-15 23:28:15.524463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.323 [2024-07-15 23:28:15.524659] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.323 [2024-07-15 23:28:15.524892] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.323 [2024-07-15 23:28:15.524914] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.323 [2024-07-15 23:28:15.524927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.323 [2024-07-15 23:28:15.527888] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.323 [2024-07-15 23:28:15.537291] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.323 [2024-07-15 23:28:15.537705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.323 [2024-07-15 23:28:15.537729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.323 [2024-07-15 23:28:15.537766] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.323 [2024-07-15 23:28:15.537969] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.323 [2024-07-15 23:28:15.538186] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.323 [2024-07-15 23:28:15.538206] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.323 [2024-07-15 23:28:15.538219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.323 [2024-07-15 23:28:15.541228] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.323 [2024-07-15 23:28:15.550531] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.323 [2024-07-15 23:28:15.550995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.323 [2024-07-15 23:28:15.551026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.323 [2024-07-15 23:28:15.551041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.323 [2024-07-15 23:28:15.551235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.323 [2024-07-15 23:28:15.551434] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.324 [2024-07-15 23:28:15.551453] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.324 [2024-07-15 23:28:15.551465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.324 [2024-07-15 23:28:15.554526] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.324 [2024-07-15 23:28:15.563715] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.324 [2024-07-15 23:28:15.564209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.324 [2024-07-15 23:28:15.564234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.324 [2024-07-15 23:28:15.564264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.324 [2024-07-15 23:28:15.564459] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.324 [2024-07-15 23:28:15.564657] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.324 [2024-07-15 23:28:15.564681] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.324 [2024-07-15 23:28:15.564695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.324 [2024-07-15 23:28:15.567701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.324 [2024-07-15 23:28:15.577073] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.324 [2024-07-15 23:28:15.577559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.324 [2024-07-15 23:28:15.577599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.324 [2024-07-15 23:28:15.577614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.324 [2024-07-15 23:28:15.577854] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.324 [2024-07-15 23:28:15.578067] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.324 [2024-07-15 23:28:15.578088] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.324 [2024-07-15 23:28:15.578101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.324 [2024-07-15 23:28:15.581099] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.324 [2024-07-15 23:28:15.590263] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.324 [2024-07-15 23:28:15.590694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.324 [2024-07-15 23:28:15.590733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.324 [2024-07-15 23:28:15.590757] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.324 [2024-07-15 23:28:15.590979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.324 [2024-07-15 23:28:15.591200] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.324 [2024-07-15 23:28:15.591220] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.324 [2024-07-15 23:28:15.591232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.324 [2024-07-15 23:28:15.594212] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.324 [2024-07-15 23:28:15.603597] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.324 [2024-07-15 23:28:15.604103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.324 [2024-07-15 23:28:15.604127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.324 [2024-07-15 23:28:15.604142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.324 [2024-07-15 23:28:15.604352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.324 [2024-07-15 23:28:15.604551] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.324 [2024-07-15 23:28:15.604570] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.324 [2024-07-15 23:28:15.604582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.324 [2024-07-15 23:28:15.607603] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.324 [2024-07-15 23:28:15.616916] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.324 [2024-07-15 23:28:15.617366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.324 [2024-07-15 23:28:15.617391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.324 [2024-07-15 23:28:15.617420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.324 [2024-07-15 23:28:15.617615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.324 [2024-07-15 23:28:15.617860] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.324 [2024-07-15 23:28:15.617881] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.324 [2024-07-15 23:28:15.617895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.324 [2024-07-15 23:28:15.620894] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.324 [2024-07-15 23:28:15.630193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.324 [2024-07-15 23:28:15.630684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.324 [2024-07-15 23:28:15.630708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.324 [2024-07-15 23:28:15.630747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.324 [2024-07-15 23:28:15.630972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.324 [2024-07-15 23:28:15.631193] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.324 [2024-07-15 23:28:15.631213] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.324 [2024-07-15 23:28:15.631226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.324 [2024-07-15 23:28:15.634490] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.583 [2024-07-15 23:28:15.643772] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.583 [2024-07-15 23:28:15.644322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.583 [2024-07-15 23:28:15.644347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.583 [2024-07-15 23:28:15.644376] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.583 [2024-07-15 23:28:15.644615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.583 [2024-07-15 23:28:15.644843] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.583 [2024-07-15 23:28:15.644865] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.583 [2024-07-15 23:28:15.644879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.583 [2024-07-15 23:28:15.648260] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.583 [2024-07-15 23:28:15.657152] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.583 [2024-07-15 23:28:15.657618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.583 [2024-07-15 23:28:15.657657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.583 [2024-07-15 23:28:15.657672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.583 [2024-07-15 23:28:15.657925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.583 [2024-07-15 23:28:15.658185] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.583 [2024-07-15 23:28:15.658204] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.583 [2024-07-15 23:28:15.658217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.583 [2024-07-15 23:28:15.661256] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.583 [2024-07-15 23:28:15.670427] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.583 [2024-07-15 23:28:15.670884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.583 [2024-07-15 23:28:15.670924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.583 [2024-07-15 23:28:15.670940] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.583 [2024-07-15 23:28:15.671152] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.583 [2024-07-15 23:28:15.671352] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.583 [2024-07-15 23:28:15.671371] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.583 [2024-07-15 23:28:15.671383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.583 [2024-07-15 23:28:15.674359] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.583 [2024-07-15 23:28:15.683675] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.583 [2024-07-15 23:28:15.684138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.583 [2024-07-15 23:28:15.684185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.584 [2024-07-15 23:28:15.684199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.584 [2024-07-15 23:28:15.684408] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.584 [2024-07-15 23:28:15.684607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.584 [2024-07-15 23:28:15.684627] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.584 [2024-07-15 23:28:15.684639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.584 [2024-07-15 23:28:15.687666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.584 [2024-07-15 23:28:15.696968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.584 [2024-07-15 23:28:15.697458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.584 [2024-07-15 23:28:15.697495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.584 [2024-07-15 23:28:15.697510] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.584 [2024-07-15 23:28:15.697706] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.584 [2024-07-15 23:28:15.697955] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.584 [2024-07-15 23:28:15.697976] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.584 [2024-07-15 23:28:15.697995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.584 [2024-07-15 23:28:15.700937] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.584 [2024-07-15 23:28:15.710244] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.584 [2024-07-15 23:28:15.710693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.584 [2024-07-15 23:28:15.710731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.584 [2024-07-15 23:28:15.710754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.584 [2024-07-15 23:28:15.710976] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.584 [2024-07-15 23:28:15.711197] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.584 [2024-07-15 23:28:15.711217] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.584 [2024-07-15 23:28:15.711229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.584 [2024-07-15 23:28:15.714208] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.584 [2024-07-15 23:28:15.723453] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.584 [2024-07-15 23:28:15.723947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.584 [2024-07-15 23:28:15.723986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.584 [2024-07-15 23:28:15.724001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.584 [2024-07-15 23:28:15.724196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.584 [2024-07-15 23:28:15.724395] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.584 [2024-07-15 23:28:15.724414] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.584 [2024-07-15 23:28:15.724427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.584 [2024-07-15 23:28:15.727416] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.584 [2024-07-15 23:28:15.736723] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.584 [2024-07-15 23:28:15.737158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.584 [2024-07-15 23:28:15.737197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.584 [2024-07-15 23:28:15.737211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.584 [2024-07-15 23:28:15.737421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.584 [2024-07-15 23:28:15.737621] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.584 [2024-07-15 23:28:15.737640] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.584 [2024-07-15 23:28:15.737653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.584 [2024-07-15 23:28:15.740674] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.584 [2024-07-15 23:28:15.749960] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.584 [2024-07-15 23:28:15.750467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.584 [2024-07-15 23:28:15.750491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.584 [2024-07-15 23:28:15.750520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.584 [2024-07-15 23:28:15.750715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.584 [2024-07-15 23:28:15.750964] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.584 [2024-07-15 23:28:15.750986] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.584 [2024-07-15 23:28:15.750999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.584 [2024-07-15 23:28:15.753995] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.584 [2024-07-15 23:28:15.763151] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.584 [2024-07-15 23:28:15.763591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.584 [2024-07-15 23:28:15.763630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.584 [2024-07-15 23:28:15.763644] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.584 [2024-07-15 23:28:15.763886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.584 [2024-07-15 23:28:15.764125] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.584 [2024-07-15 23:28:15.764145] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.584 [2024-07-15 23:28:15.764157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.584 [2024-07-15 23:28:15.767137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.584 [2024-07-15 23:28:15.776413] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.584 [2024-07-15 23:28:15.776881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.584 [2024-07-15 23:28:15.776907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.584 [2024-07-15 23:28:15.776936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.584 [2024-07-15 23:28:15.777148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.584 [2024-07-15 23:28:15.777348] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.584 [2024-07-15 23:28:15.777367] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.584 [2024-07-15 23:28:15.777379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.584 [2024-07-15 23:28:15.780373] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.584 [2024-07-15 23:28:15.789702] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.584 [2024-07-15 23:28:15.790180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.584 [2024-07-15 23:28:15.790205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.584 [2024-07-15 23:28:15.790219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.584 [2024-07-15 23:28:15.790442] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.584 [2024-07-15 23:28:15.790641] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.584 [2024-07-15 23:28:15.790660] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.584 [2024-07-15 23:28:15.790673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.584 [2024-07-15 23:28:15.793644] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.584 [2024-07-15 23:28:15.803013] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.584 [2024-07-15 23:28:15.803459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.584 [2024-07-15 23:28:15.803499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.584 [2024-07-15 23:28:15.803513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.584 [2024-07-15 23:28:15.803745] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.584 [2024-07-15 23:28:15.803972] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.584 [2024-07-15 23:28:15.803993] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.584 [2024-07-15 23:28:15.804007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.584 [2024-07-15 23:28:15.807003] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.584 [2024-07-15 23:28:15.816353] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.584 [2024-07-15 23:28:15.816855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.584 [2024-07-15 23:28:15.816896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.584 [2024-07-15 23:28:15.816911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.584 [2024-07-15 23:28:15.817126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.584 [2024-07-15 23:28:15.817338] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.584 [2024-07-15 23:28:15.817358] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.584 [2024-07-15 23:28:15.817371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.584 [2024-07-15 23:28:15.820373] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.585 [2024-07-15 23:28:15.829656] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.585 [2024-07-15 23:28:15.830096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.585 [2024-07-15 23:28:15.830123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.585 [2024-07-15 23:28:15.830153] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.585 [2024-07-15 23:28:15.830368] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.585 [2024-07-15 23:28:15.830586] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.585 [2024-07-15 23:28:15.830605] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.585 [2024-07-15 23:28:15.830622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.585 [2024-07-15 23:28:15.833760] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.585 [2024-07-15 23:28:15.842979] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.585 [2024-07-15 23:28:15.843411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.585 [2024-07-15 23:28:15.843435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.585 [2024-07-15 23:28:15.843449] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.585 [2024-07-15 23:28:15.843659] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.585 [2024-07-15 23:28:15.843885] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.585 [2024-07-15 23:28:15.843906] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.585 [2024-07-15 23:28:15.843919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.585 [2024-07-15 23:28:15.846941] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.585 [2024-07-15 23:28:15.856280] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.585 [2024-07-15 23:28:15.856703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.585 [2024-07-15 23:28:15.856728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.585 [2024-07-15 23:28:15.856763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.585 [2024-07-15 23:28:15.856988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.585 [2024-07-15 23:28:15.857227] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.585 [2024-07-15 23:28:15.857247] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.585 [2024-07-15 23:28:15.857259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.585 [2024-07-15 23:28:15.860247] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.585 [2024-07-15 23:28:15.869567] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.585 [2024-07-15 23:28:15.869974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.585 [2024-07-15 23:28:15.870000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.585 [2024-07-15 23:28:15.870015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.585 [2024-07-15 23:28:15.870226] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.585 [2024-07-15 23:28:15.870425] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.585 [2024-07-15 23:28:15.870444] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.585 [2024-07-15 23:28:15.870457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.585 [2024-07-15 23:28:15.873468] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.585 [2024-07-15 23:28:15.882980] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.585 [2024-07-15 23:28:15.883447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.585 [2024-07-15 23:28:15.883494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.585 [2024-07-15 23:28:15.883509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.585 [2024-07-15 23:28:15.883734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.585 [2024-07-15 23:28:15.883969] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.585 [2024-07-15 23:28:15.883991] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.585 [2024-07-15 23:28:15.884004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.585 [2024-07-15 23:28:15.887228] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.585 [2024-07-15 23:28:15.896523] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.585 [2024-07-15 23:28:15.897036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.585 [2024-07-15 23:28:15.897069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.585 [2024-07-15 23:28:15.897084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.585 [2024-07-15 23:28:15.897314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.844 [2024-07-15 23:28:15.897533] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.844 [2024-07-15 23:28:15.897554] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.844 [2024-07-15 23:28:15.897568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.844 [2024-07-15 23:28:15.900941] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.844 [2024-07-15 23:28:15.909906] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.844 [2024-07-15 23:28:15.910351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.844 [2024-07-15 23:28:15.910395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.844 [2024-07-15 23:28:15.910409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.844 [2024-07-15 23:28:15.910604] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.844 [2024-07-15 23:28:15.910846] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.844 [2024-07-15 23:28:15.910868] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.844 [2024-07-15 23:28:15.910881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.844 [2024-07-15 23:28:15.913885] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.844 [2024-07-15 23:28:15.923196] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.844 [2024-07-15 23:28:15.923673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.844 [2024-07-15 23:28:15.923697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.844 [2024-07-15 23:28:15.923726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.844 [2024-07-15 23:28:15.923955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.844 [2024-07-15 23:28:15.924182] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.844 [2024-07-15 23:28:15.924202] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.844 [2024-07-15 23:28:15.924215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.844 [2024-07-15 23:28:15.927158] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.844 [2024-07-15 23:28:15.936475] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.844 [2024-07-15 23:28:15.936969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.844 [2024-07-15 23:28:15.937009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.844 [2024-07-15 23:28:15.937023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.844 [2024-07-15 23:28:15.937235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.844 [2024-07-15 23:28:15.937434] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.844 [2024-07-15 23:28:15.937453] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.844 [2024-07-15 23:28:15.937466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.844 [2024-07-15 23:28:15.940455] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.844 [2024-07-15 23:28:15.949744] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.844 [2024-07-15 23:28:15.950202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.844 [2024-07-15 23:28:15.950227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.844 [2024-07-15 23:28:15.950255] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.844 [2024-07-15 23:28:15.950450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.844 [2024-07-15 23:28:15.950649] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.844 [2024-07-15 23:28:15.950668] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.844 [2024-07-15 23:28:15.950680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.844 [2024-07-15 23:28:15.953686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.844 [2024-07-15 23:28:15.962989] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.844 [2024-07-15 23:28:15.963492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.844 [2024-07-15 23:28:15.963517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.844 [2024-07-15 23:28:15.963546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.844 [2024-07-15 23:28:15.963762] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.844 [2024-07-15 23:28:15.963990] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.844 [2024-07-15 23:28:15.964010] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.844 [2024-07-15 23:28:15.964024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.844 [2024-07-15 23:28:15.967044] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.844 [2024-07-15 23:28:15.976277] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.844 [2024-07-15 23:28:15.976758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.844 [2024-07-15 23:28:15.976784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.844 [2024-07-15 23:28:15.976813] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.844 [2024-07-15 23:28:15.977014] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.844 [2024-07-15 23:28:15.977228] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.844 [2024-07-15 23:28:15.977248] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.844 [2024-07-15 23:28:15.977260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.844 [2024-07-15 23:28:15.980282] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.844 [2024-07-15 23:28:15.989590] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.844 [2024-07-15 23:28:15.990110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.844 [2024-07-15 23:28:15.990136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.844 [2024-07-15 23:28:15.990150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.844 [2024-07-15 23:28:15.990345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.844 [2024-07-15 23:28:15.990545] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.844 [2024-07-15 23:28:15.990564] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.844 [2024-07-15 23:28:15.990577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.844 [2024-07-15 23:28:15.993593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.844 [2024-07-15 23:28:16.002966] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.844 [2024-07-15 23:28:16.003428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.844 [2024-07-15 23:28:16.003468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.844 [2024-07-15 23:28:16.003483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.844 [2024-07-15 23:28:16.003678] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.844 [2024-07-15 23:28:16.003925] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.844 [2024-07-15 23:28:16.003947] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.844 [2024-07-15 23:28:16.003960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.844 [2024-07-15 23:28:16.006960] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.844 [2024-07-15 23:28:16.016261] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.844 [2024-07-15 23:28:16.016750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.845 [2024-07-15 23:28:16.016776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.845 [2024-07-15 23:28:16.016810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.845 [2024-07-15 23:28:16.017012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.845 [2024-07-15 23:28:16.017227] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.845 [2024-07-15 23:28:16.017247] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.845 [2024-07-15 23:28:16.017259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.845 [2024-07-15 23:28:16.020278] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.845 [2024-07-15 23:28:16.029536] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.845 [2024-07-15 23:28:16.030029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.845 [2024-07-15 23:28:16.030069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.845 [2024-07-15 23:28:16.030083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.845 [2024-07-15 23:28:16.030292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.845 [2024-07-15 23:28:16.030491] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.845 [2024-07-15 23:28:16.030510] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.845 [2024-07-15 23:28:16.030523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.845 [2024-07-15 23:28:16.033542] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.845 [2024-07-15 23:28:16.042750] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.845 [2024-07-15 23:28:16.043218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.845 [2024-07-15 23:28:16.043253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.845 [2024-07-15 23:28:16.043283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.845 [2024-07-15 23:28:16.043478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.845 [2024-07-15 23:28:16.043676] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.845 [2024-07-15 23:28:16.043696] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.845 [2024-07-15 23:28:16.043709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.845 [2024-07-15 23:28:16.046710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.845 [2024-07-15 23:28:16.056042] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.845 [2024-07-15 23:28:16.056537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.845 [2024-07-15 23:28:16.056562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.845 [2024-07-15 23:28:16.056590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.845 [2024-07-15 23:28:16.056813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.845 [2024-07-15 23:28:16.057025] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.845 [2024-07-15 23:28:16.057065] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.845 [2024-07-15 23:28:16.057078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.845 [2024-07-15 23:28:16.060075] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.845 [2024-07-15 23:28:16.069359] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.845 [2024-07-15 23:28:16.069830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.845 [2024-07-15 23:28:16.069869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.845 [2024-07-15 23:28:16.069884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.845 [2024-07-15 23:28:16.070098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.845 [2024-07-15 23:28:16.070297] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.845 [2024-07-15 23:28:16.070317] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.845 [2024-07-15 23:28:16.070329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.845 [2024-07-15 23:28:16.073337] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.845 [2024-07-15 23:28:16.082636] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.845 [2024-07-15 23:28:16.083126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.845 [2024-07-15 23:28:16.083151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.845 [2024-07-15 23:28:16.083180] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.845 [2024-07-15 23:28:16.083375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.845 [2024-07-15 23:28:16.083575] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.845 [2024-07-15 23:28:16.083594] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.845 [2024-07-15 23:28:16.083607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.845 [2024-07-15 23:28:16.086627] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.845 [2024-07-15 23:28:16.096057] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.845 [2024-07-15 23:28:16.096519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.845 [2024-07-15 23:28:16.096559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.845 [2024-07-15 23:28:16.096573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.845 [2024-07-15 23:28:16.096811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.845 [2024-07-15 23:28:16.097023] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.845 [2024-07-15 23:28:16.097059] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.845 [2024-07-15 23:28:16.097072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.845 [2024-07-15 23:28:16.100070] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.845 [2024-07-15 23:28:16.109397] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.845 [2024-07-15 23:28:16.109882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.845 [2024-07-15 23:28:16.109907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.845 [2024-07-15 23:28:16.109936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.845 [2024-07-15 23:28:16.110131] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.845 [2024-07-15 23:28:16.110330] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.845 [2024-07-15 23:28:16.110349] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.845 [2024-07-15 23:28:16.110362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.845 [2024-07-15 23:28:16.113367] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.845 [2024-07-15 23:28:16.122650] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.845 [2024-07-15 23:28:16.123129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.845 [2024-07-15 23:28:16.123153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.845 [2024-07-15 23:28:16.123167] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.845 [2024-07-15 23:28:16.123376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.845 [2024-07-15 23:28:16.123575] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.845 [2024-07-15 23:28:16.123594] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.845 [2024-07-15 23:28:16.123606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.845 [2024-07-15 23:28:16.126628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.845 [2024-07-15 23:28:16.135975] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.845 [2024-07-15 23:28:16.136403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.845 [2024-07-15 23:28:16.136442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.845 [2024-07-15 23:28:16.136458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.845 [2024-07-15 23:28:16.136666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.845 [2024-07-15 23:28:16.136921] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.845 [2024-07-15 23:28:16.136943] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.845 [2024-07-15 23:28:16.136957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.845 [2024-07-15 23:28:16.139969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.845 [2024-07-15 23:28:16.149242] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.845 [2024-07-15 23:28:16.149772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.845 [2024-07-15 23:28:16.149814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:00.845 [2024-07-15 23:28:16.149829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:00.845 [2024-07-15 23:28:16.150043] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:00.845 [2024-07-15 23:28:16.150275] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.845 [2024-07-15 23:28:16.150297] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.845 [2024-07-15 23:28:16.150311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.845 [2024-07-15 23:28:16.153829] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.105 [2024-07-15 23:28:16.162531] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.105 [2024-07-15 23:28:16.163081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.105 [2024-07-15 23:28:16.163108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.105 [2024-07-15 23:28:16.163139] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.105 [2024-07-15 23:28:16.163364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.105 [2024-07-15 23:28:16.163604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.105 [2024-07-15 23:28:16.163623] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.105 [2024-07-15 23:28:16.163636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.105 [2024-07-15 23:28:16.166642] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.105 [2024-07-15 23:28:16.175770] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.105 [2024-07-15 23:28:16.176276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.105 [2024-07-15 23:28:16.176315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.105 [2024-07-15 23:28:16.176330] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.105 [2024-07-15 23:28:16.176525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.105 [2024-07-15 23:28:16.176748] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.105 [2024-07-15 23:28:16.176768] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.105 [2024-07-15 23:28:16.176796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.105 [2024-07-15 23:28:16.179798] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.105 [2024-07-15 23:28:16.189158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.105 [2024-07-15 23:28:16.189600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.105 [2024-07-15 23:28:16.189638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.105 [2024-07-15 23:28:16.189653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.105 [2024-07-15 23:28:16.189895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.105 [2024-07-15 23:28:16.190122] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.105 [2024-07-15 23:28:16.190142] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.105 [2024-07-15 23:28:16.190160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.105 [2024-07-15 23:28:16.193156] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.105 [2024-07-15 23:28:16.202515] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.105 [2024-07-15 23:28:16.202994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.105 [2024-07-15 23:28:16.203034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.105 [2024-07-15 23:28:16.203050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.105 [2024-07-15 23:28:16.203261] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.105 [2024-07-15 23:28:16.203460] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.105 [2024-07-15 23:28:16.203480] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.105 [2024-07-15 23:28:16.203492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.105 [2024-07-15 23:28:16.206511] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.105 [2024-07-15 23:28:16.215820] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.105 [2024-07-15 23:28:16.216309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.105 [2024-07-15 23:28:16.216334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.105 [2024-07-15 23:28:16.216362] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.105 [2024-07-15 23:28:16.216557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.105 [2024-07-15 23:28:16.216781] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.105 [2024-07-15 23:28:16.216802] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.105 [2024-07-15 23:28:16.216816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.105 [2024-07-15 23:28:16.219819] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.105 [2024-07-15 23:28:16.229117] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.105 [2024-07-15 23:28:16.229595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.105 [2024-07-15 23:28:16.229634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.105 [2024-07-15 23:28:16.229649] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.105 [2024-07-15 23:28:16.229892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.105 [2024-07-15 23:28:16.230131] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.105 [2024-07-15 23:28:16.230151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.105 [2024-07-15 23:28:16.230164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.105 [2024-07-15 23:28:16.233148] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.106 [2024-07-15 23:28:16.242384] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.106 [2024-07-15 23:28:16.242873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.106 [2024-07-15 23:28:16.242913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.106 [2024-07-15 23:28:16.242929] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.106 [2024-07-15 23:28:16.243142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.106 [2024-07-15 23:28:16.243341] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.106 [2024-07-15 23:28:16.243360] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.106 [2024-07-15 23:28:16.243373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.106 [2024-07-15 23:28:16.246365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.106 [2024-07-15 23:28:16.255646] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.106 [2024-07-15 23:28:16.256115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.106 [2024-07-15 23:28:16.256140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.106 [2024-07-15 23:28:16.256153] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.106 [2024-07-15 23:28:16.256363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.106 [2024-07-15 23:28:16.256562] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.106 [2024-07-15 23:28:16.256581] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.106 [2024-07-15 23:28:16.256594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.106 [2024-07-15 23:28:16.259613] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.106 [2024-07-15 23:28:16.268922] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.106 [2024-07-15 23:28:16.269419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.106 [2024-07-15 23:28:16.269457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.106 [2024-07-15 23:28:16.269472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.106 [2024-07-15 23:28:16.269666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.106 [2024-07-15 23:28:16.269914] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.106 [2024-07-15 23:28:16.269936] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.106 [2024-07-15 23:28:16.269949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.106 [2024-07-15 23:28:16.272947] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.106 [2024-07-15 23:28:16.282819] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.106 [2024-07-15 23:28:16.283323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.106 [2024-07-15 23:28:16.283353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.106 [2024-07-15 23:28:16.283370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.106 [2024-07-15 23:28:16.283608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.106 [2024-07-15 23:28:16.283870] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.106 [2024-07-15 23:28:16.283892] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.106 [2024-07-15 23:28:16.283906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.106 [2024-07-15 23:28:16.287441] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.106 [2024-07-15 23:28:16.296585] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.106 [2024-07-15 23:28:16.297118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.106 [2024-07-15 23:28:16.297166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.106 [2024-07-15 23:28:16.297183] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.106 [2024-07-15 23:28:16.297421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.106 [2024-07-15 23:28:16.297664] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.106 [2024-07-15 23:28:16.297688] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.106 [2024-07-15 23:28:16.297704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.106 [2024-07-15 23:28:16.301269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.106 [2024-07-15 23:28:16.310546] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.106 [2024-07-15 23:28:16.311047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.106 [2024-07-15 23:28:16.311078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.106 [2024-07-15 23:28:16.311095] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.106 [2024-07-15 23:28:16.311334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.106 [2024-07-15 23:28:16.311577] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.106 [2024-07-15 23:28:16.311600] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.106 [2024-07-15 23:28:16.311615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.106 [2024-07-15 23:28:16.315191] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.106 [2024-07-15 23:28:16.324468] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.106 [2024-07-15 23:28:16.324953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.106 [2024-07-15 23:28:16.324984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.106 [2024-07-15 23:28:16.325002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.106 [2024-07-15 23:28:16.325239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.106 [2024-07-15 23:28:16.325483] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.106 [2024-07-15 23:28:16.325507] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.106 [2024-07-15 23:28:16.325522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.106 [2024-07-15 23:28:16.329111] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.106 [2024-07-15 23:28:16.338383] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.106 [2024-07-15 23:28:16.338884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.106 [2024-07-15 23:28:16.338916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.106 [2024-07-15 23:28:16.338933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.106 [2024-07-15 23:28:16.339171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.106 [2024-07-15 23:28:16.339414] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.106 [2024-07-15 23:28:16.339438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.106 [2024-07-15 23:28:16.339453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.106 [2024-07-15 23:28:16.343035] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.106 [2024-07-15 23:28:16.352310] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.106 [2024-07-15 23:28:16.352795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.106 [2024-07-15 23:28:16.352831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.106 [2024-07-15 23:28:16.352848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.106 [2024-07-15 23:28:16.353086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.106 [2024-07-15 23:28:16.353329] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.106 [2024-07-15 23:28:16.353353] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.106 [2024-07-15 23:28:16.353368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.106 [2024-07-15 23:28:16.356953] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.106 [2024-07-15 23:28:16.366231] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.106 [2024-07-15 23:28:16.366661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.106 [2024-07-15 23:28:16.366698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.106 [2024-07-15 23:28:16.366716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.106 [2024-07-15 23:28:16.366965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.106 [2024-07-15 23:28:16.367209] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.106 [2024-07-15 23:28:16.367233] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.106 [2024-07-15 23:28:16.367248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.106 [2024-07-15 23:28:16.370826] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.106 [2024-07-15 23:28:16.380098] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.106 [2024-07-15 23:28:16.380608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.106 [2024-07-15 23:28:16.380639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.106 [2024-07-15 23:28:16.380662] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.106 [2024-07-15 23:28:16.380910] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.106 [2024-07-15 23:28:16.381154] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.106 [2024-07-15 23:28:16.381177] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.106 [2024-07-15 23:28:16.381192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.106 [2024-07-15 23:28:16.384771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.107 [2024-07-15 23:28:16.394049] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.107 [2024-07-15 23:28:16.394544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.107 [2024-07-15 23:28:16.394575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.107 [2024-07-15 23:28:16.394592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.107 [2024-07-15 23:28:16.394841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.107 [2024-07-15 23:28:16.395085] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.107 [2024-07-15 23:28:16.395108] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.107 [2024-07-15 23:28:16.395123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.107 [2024-07-15 23:28:16.398692] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.107 [2024-07-15 23:28:16.407976] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.107 [2024-07-15 23:28:16.408464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.107 [2024-07-15 23:28:16.408490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.107 [2024-07-15 23:28:16.408520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.107 [2024-07-15 23:28:16.408772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.107 [2024-07-15 23:28:16.408992] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.107 [2024-07-15 23:28:16.409014] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.107 [2024-07-15 23:28:16.409027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.107 [2024-07-15 23:28:16.412520] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.365 [2024-07-15 23:28:16.421884] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.365 [2024-07-15 23:28:16.422441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.365 [2024-07-15 23:28:16.422490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.365 [2024-07-15 23:28:16.422507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.365 [2024-07-15 23:28:16.422756] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.365 [2024-07-15 23:28:16.422990] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.365 [2024-07-15 23:28:16.423011] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.365 [2024-07-15 23:28:16.423039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.365 [2024-07-15 23:28:16.426613] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.365 [2024-07-15 23:28:16.435798] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.365 [2024-07-15 23:28:16.436262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.365 [2024-07-15 23:28:16.436293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.365 [2024-07-15 23:28:16.436311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.365 [2024-07-15 23:28:16.436548] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.365 [2024-07-15 23:28:16.436816] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.365 [2024-07-15 23:28:16.436838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.365 [2024-07-15 23:28:16.436851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.365 [2024-07-15 23:28:16.440399] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.365 [2024-07-15 23:28:16.449673] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.365 [2024-07-15 23:28:16.450216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.365 [2024-07-15 23:28:16.450266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.365 [2024-07-15 23:28:16.450283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.365 [2024-07-15 23:28:16.450521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.365 [2024-07-15 23:28:16.450776] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.365 [2024-07-15 23:28:16.450800] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.365 [2024-07-15 23:28:16.450815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.365 [2024-07-15 23:28:16.454504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.365 [2024-07-15 23:28:16.463565] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.365 [2024-07-15 23:28:16.464122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.366 [2024-07-15 23:28:16.464173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.366 [2024-07-15 23:28:16.464190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.366 [2024-07-15 23:28:16.464429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.366 [2024-07-15 23:28:16.464671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.366 [2024-07-15 23:28:16.464695] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.366 [2024-07-15 23:28:16.464711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.366 [2024-07-15 23:28:16.468293] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.366 [2024-07-15 23:28:16.477569] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.366 [2024-07-15 23:28:16.478090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.366 [2024-07-15 23:28:16.478121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.366 [2024-07-15 23:28:16.478138] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.366 [2024-07-15 23:28:16.478376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.366 [2024-07-15 23:28:16.478619] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.366 [2024-07-15 23:28:16.478642] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.366 [2024-07-15 23:28:16.478658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.366 [2024-07-15 23:28:16.482239] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.366 [2024-07-15 23:28:16.491530] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.366 [2024-07-15 23:28:16.491964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.366 [2024-07-15 23:28:16.491995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.366 [2024-07-15 23:28:16.492013] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.366 [2024-07-15 23:28:16.492252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.366 [2024-07-15 23:28:16.492495] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.366 [2024-07-15 23:28:16.492519] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.366 [2024-07-15 23:28:16.492534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.366 [2024-07-15 23:28:16.496112] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.366 [2024-07-15 23:28:16.505382] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.366 [2024-07-15 23:28:16.505807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.366 [2024-07-15 23:28:16.505839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.366 [2024-07-15 23:28:16.505856] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.366 [2024-07-15 23:28:16.506095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.366 [2024-07-15 23:28:16.506338] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.366 [2024-07-15 23:28:16.506362] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.366 [2024-07-15 23:28:16.506378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.366 [2024-07-15 23:28:16.509960] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.366 [2024-07-15 23:28:16.519260] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.366 [2024-07-15 23:28:16.519649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.366 [2024-07-15 23:28:16.519681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.366 [2024-07-15 23:28:16.519705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.366 [2024-07-15 23:28:16.519953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.366 [2024-07-15 23:28:16.520198] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.366 [2024-07-15 23:28:16.520222] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.366 [2024-07-15 23:28:16.520237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.366 [2024-07-15 23:28:16.523817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.366 [2024-07-15 23:28:16.533309] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.366 [2024-07-15 23:28:16.533712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.366 [2024-07-15 23:28:16.533752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.366 [2024-07-15 23:28:16.533773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.366 [2024-07-15 23:28:16.534012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.366 [2024-07-15 23:28:16.534255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.366 [2024-07-15 23:28:16.534279] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.366 [2024-07-15 23:28:16.534294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.366 [2024-07-15 23:28:16.537875] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.366 [2024-07-15 23:28:16.546687] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.366 [2024-07-15 23:28:16.547113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.366 [2024-07-15 23:28:16.547139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.366 [2024-07-15 23:28:16.547153] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.366 [2024-07-15 23:28:16.547348] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.366 [2024-07-15 23:28:16.547547] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.366 [2024-07-15 23:28:16.547566] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.366 [2024-07-15 23:28:16.547578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.366 [2024-07-15 23:28:16.550757] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.366 [2024-07-15 23:28:16.560681] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.366 [2024-07-15 23:28:16.561097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.366 [2024-07-15 23:28:16.561150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.366 [2024-07-15 23:28:16.561167] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.366 [2024-07-15 23:28:16.561405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.366 [2024-07-15 23:28:16.561649] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.366 [2024-07-15 23:28:16.561678] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.366 [2024-07-15 23:28:16.561694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.366 [2024-07-15 23:28:16.565270] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.366 [2024-07-15 23:28:16.574553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.366 [2024-07-15 23:28:16.574931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.366 [2024-07-15 23:28:16.574962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.366 [2024-07-15 23:28:16.574980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.366 [2024-07-15 23:28:16.575218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.366 [2024-07-15 23:28:16.575461] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.366 [2024-07-15 23:28:16.575485] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.366 [2024-07-15 23:28:16.575499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.366 [2024-07-15 23:28:16.579085] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.366 [2024-07-15 23:28:16.588572] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.366 [2024-07-15 23:28:16.588965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.366 [2024-07-15 23:28:16.588999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.366 [2024-07-15 23:28:16.589016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.366 [2024-07-15 23:28:16.589255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.366 [2024-07-15 23:28:16.589498] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.366 [2024-07-15 23:28:16.589522] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.366 [2024-07-15 23:28:16.589537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.366 [2024-07-15 23:28:16.593134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.366 [2024-07-15 23:28:16.602154] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.366 [2024-07-15 23:28:16.602538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.366 [2024-07-15 23:28:16.602566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.366 [2024-07-15 23:28:16.602581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.366 [2024-07-15 23:28:16.602804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.366 [2024-07-15 23:28:16.603023] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.366 [2024-07-15 23:28:16.603045] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.366 [2024-07-15 23:28:16.603059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.366 [2024-07-15 23:28:16.606249] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.366 [2024-07-15 23:28:16.615516] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.366 [2024-07-15 23:28:16.615935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.367 [2024-07-15 23:28:16.615961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.367 [2024-07-15 23:28:16.615976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.367 [2024-07-15 23:28:16.616238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.367 [2024-07-15 23:28:16.616481] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.367 [2024-07-15 23:28:16.616505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.367 [2024-07-15 23:28:16.616521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.367 [2024-07-15 23:28:16.620095] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.367 [2024-07-15 23:28:16.629403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.367 [2024-07-15 23:28:16.629812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.367 [2024-07-15 23:28:16.629854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.367 [2024-07-15 23:28:16.629870] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.367 [2024-07-15 23:28:16.630126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.367 [2024-07-15 23:28:16.630370] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.367 [2024-07-15 23:28:16.630393] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.367 [2024-07-15 23:28:16.630409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.367 [2024-07-15 23:28:16.633785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.367 [2024-07-15 23:28:16.643300] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.367 [2024-07-15 23:28:16.643745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.367 [2024-07-15 23:28:16.643776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.367 [2024-07-15 23:28:16.643794] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.367 [2024-07-15 23:28:16.644032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.367 [2024-07-15 23:28:16.644276] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.367 [2024-07-15 23:28:16.644299] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.367 [2024-07-15 23:28:16.644314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.367 [2024-07-15 23:28:16.647889] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.367 [2024-07-15 23:28:16.657162] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.367 [2024-07-15 23:28:16.657621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.367 [2024-07-15 23:28:16.657652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.367 [2024-07-15 23:28:16.657669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.367 [2024-07-15 23:28:16.657921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.367 [2024-07-15 23:28:16.658165] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.367 [2024-07-15 23:28:16.658189] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.367 [2024-07-15 23:28:16.658204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.367 [2024-07-15 23:28:16.661701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.367 [2024-07-15 23:28:16.671052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.367 [2024-07-15 23:28:16.671537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.367 [2024-07-15 23:28:16.671587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.367 [2024-07-15 23:28:16.671604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.367 [2024-07-15 23:28:16.671858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.367 [2024-07-15 23:28:16.672085] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.367 [2024-07-15 23:28:16.672122] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.367 [2024-07-15 23:28:16.672138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.367 [2024-07-15 23:28:16.675686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.625 [2024-07-15 23:28:16.685090] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.625 [2024-07-15 23:28:16.685579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.625 [2024-07-15 23:28:16.685629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.625 [2024-07-15 23:28:16.685646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.625 [2024-07-15 23:28:16.685897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.625 [2024-07-15 23:28:16.686149] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.625 [2024-07-15 23:28:16.686173] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.625 [2024-07-15 23:28:16.686189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.625 [2024-07-15 23:28:16.689767] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.625 [2024-07-15 23:28:16.698988] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.625 [2024-07-15 23:28:16.699472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.625 [2024-07-15 23:28:16.699524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.625 [2024-07-15 23:28:16.699541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.625 [2024-07-15 23:28:16.699795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.625 [2024-07-15 23:28:16.700039] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.625 [2024-07-15 23:28:16.700063] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.625 [2024-07-15 23:28:16.700083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.626 [2024-07-15 23:28:16.703651] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.626 [2024-07-15 23:28:16.712959] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.626 [2024-07-15 23:28:16.713488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.626 [2024-07-15 23:28:16.713540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.626 [2024-07-15 23:28:16.713557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.626 [2024-07-15 23:28:16.713807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.626 [2024-07-15 23:28:16.714050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.626 [2024-07-15 23:28:16.714074] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.626 [2024-07-15 23:28:16.714089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.626 [2024-07-15 23:28:16.717658] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.626 [2024-07-15 23:28:16.726938] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.626 [2024-07-15 23:28:16.727391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.626 [2024-07-15 23:28:16.727422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.626 [2024-07-15 23:28:16.727439] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.626 [2024-07-15 23:28:16.727677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.626 [2024-07-15 23:28:16.727931] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.626 [2024-07-15 23:28:16.727956] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.626 [2024-07-15 23:28:16.727971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.626 [2024-07-15 23:28:16.731546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.626 [2024-07-15 23:28:16.740826] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.626 [2024-07-15 23:28:16.741298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.626 [2024-07-15 23:28:16.741328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.626 [2024-07-15 23:28:16.741346] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.626 [2024-07-15 23:28:16.741584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.626 [2024-07-15 23:28:16.741838] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.626 [2024-07-15 23:28:16.741863] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.626 [2024-07-15 23:28:16.741878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.626 [2024-07-15 23:28:16.745449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.626 [2024-07-15 23:28:16.754723] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.626 [2024-07-15 23:28:16.755250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.626 [2024-07-15 23:28:16.755304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.626 [2024-07-15 23:28:16.755322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.626 [2024-07-15 23:28:16.755561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.626 [2024-07-15 23:28:16.755816] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.626 [2024-07-15 23:28:16.755841] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.626 [2024-07-15 23:28:16.755856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.626 [2024-07-15 23:28:16.759425] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.626 [2024-07-15 23:28:16.768695] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.626 [2024-07-15 23:28:16.769200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.626 [2024-07-15 23:28:16.769231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.626 [2024-07-15 23:28:16.769249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.626 [2024-07-15 23:28:16.769487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.626 [2024-07-15 23:28:16.769730] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.626 [2024-07-15 23:28:16.769764] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.626 [2024-07-15 23:28:16.769780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.626 [2024-07-15 23:28:16.773350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.626 [2024-07-15 23:28:16.782650] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.626 [2024-07-15 23:28:16.783158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.626 [2024-07-15 23:28:16.783189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.626 [2024-07-15 23:28:16.783206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.626 [2024-07-15 23:28:16.783444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.626 [2024-07-15 23:28:16.783687] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.626 [2024-07-15 23:28:16.783711] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.626 [2024-07-15 23:28:16.783726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.626 [2024-07-15 23:28:16.787308] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.626 [2024-07-15 23:28:16.796579] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.626 [2024-07-15 23:28:16.797092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.626 [2024-07-15 23:28:16.797123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.626 [2024-07-15 23:28:16.797141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.626 [2024-07-15 23:28:16.797379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.626 [2024-07-15 23:28:16.797630] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.626 [2024-07-15 23:28:16.797654] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.626 [2024-07-15 23:28:16.797669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.626 [2024-07-15 23:28:16.801248] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.626 [2024-07-15 23:28:16.810526] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.626 [2024-07-15 23:28:16.810998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.626 [2024-07-15 23:28:16.811028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.626 [2024-07-15 23:28:16.811045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.626 [2024-07-15 23:28:16.811284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.626 [2024-07-15 23:28:16.811527] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.626 [2024-07-15 23:28:16.811550] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.626 [2024-07-15 23:28:16.811565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.626 [2024-07-15 23:28:16.815146] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.626 [2024-07-15 23:28:16.824424] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.626 [2024-07-15 23:28:16.824980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.626 [2024-07-15 23:28:16.825033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.626 [2024-07-15 23:28:16.825051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.626 [2024-07-15 23:28:16.825289] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.626 [2024-07-15 23:28:16.825533] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.626 [2024-07-15 23:28:16.825556] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.626 [2024-07-15 23:28:16.825571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.626 [2024-07-15 23:28:16.829152] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.626 [2024-07-15 23:28:16.838426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.626 [2024-07-15 23:28:16.838952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.626 [2024-07-15 23:28:16.838984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.626 [2024-07-15 23:28:16.839001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.626 [2024-07-15 23:28:16.839241] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.626 [2024-07-15 23:28:16.839484] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.626 [2024-07-15 23:28:16.839512] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.626 [2024-07-15 23:28:16.839527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.626 [2024-07-15 23:28:16.843120] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.626 [2024-07-15 23:28:16.852413] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.626 [2024-07-15 23:28:16.852858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.626 [2024-07-15 23:28:16.852889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.626 [2024-07-15 23:28:16.852906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.626 [2024-07-15 23:28:16.853151] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.626 [2024-07-15 23:28:16.853394] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.626 [2024-07-15 23:28:16.853417] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.627 [2024-07-15 23:28:16.853432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.627 [2024-07-15 23:28:16.857006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.627 [2024-07-15 23:28:16.866284] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.627 [2024-07-15 23:28:16.866835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.627 [2024-07-15 23:28:16.866868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.627 [2024-07-15 23:28:16.866885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.627 [2024-07-15 23:28:16.867124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.627 [2024-07-15 23:28:16.867367] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.627 [2024-07-15 23:28:16.867390] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.627 [2024-07-15 23:28:16.867405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.627 [2024-07-15 23:28:16.870981] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.627 [2024-07-15 23:28:16.880257] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.627 [2024-07-15 23:28:16.880724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.627 [2024-07-15 23:28:16.880763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.627 [2024-07-15 23:28:16.880781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.627 [2024-07-15 23:28:16.881020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.627 [2024-07-15 23:28:16.881263] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.627 [2024-07-15 23:28:16.881286] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.627 [2024-07-15 23:28:16.881302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.627 [2024-07-15 23:28:16.884881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.627 [2024-07-15 23:28:16.894159] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.627 [2024-07-15 23:28:16.894689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.627 [2024-07-15 23:28:16.894719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.627 [2024-07-15 23:28:16.894753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.627 [2024-07-15 23:28:16.894995] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.627 [2024-07-15 23:28:16.895238] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.627 [2024-07-15 23:28:16.895263] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.627 [2024-07-15 23:28:16.895279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.627 [2024-07-15 23:28:16.898859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.627 [2024-07-15 23:28:16.908154] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.627 [2024-07-15 23:28:16.908657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.627 [2024-07-15 23:28:16.908687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.627 [2024-07-15 23:28:16.908705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.627 [2024-07-15 23:28:16.908966] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.627 [2024-07-15 23:28:16.909218] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.627 [2024-07-15 23:28:16.909242] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.627 [2024-07-15 23:28:16.909257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.627 [2024-07-15 23:28:16.912958] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.627 [2024-07-15 23:28:16.921984] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.627 [2024-07-15 23:28:16.922455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.627 [2024-07-15 23:28:16.922486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.627 [2024-07-15 23:28:16.922504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.627 [2024-07-15 23:28:16.922753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.627 [2024-07-15 23:28:16.922988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.627 [2024-07-15 23:28:16.923009] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.627 [2024-07-15 23:28:16.923037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.627 [2024-07-15 23:28:16.926581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.627 [2024-07-15 23:28:16.935830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.627 [2024-07-15 23:28:16.936307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.627 [2024-07-15 23:28:16.936338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.627 [2024-07-15 23:28:16.936356] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.627 [2024-07-15 23:28:16.936595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.627 [2024-07-15 23:28:16.936851] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.627 [2024-07-15 23:28:16.936881] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.627 [2024-07-15 23:28:16.936897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.886 [2024-07-15 23:28:16.940706] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.886 [2024-07-15 23:28:16.949775] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.886 [2024-07-15 23:28:16.950306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-07-15 23:28:16.950337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.886 [2024-07-15 23:28:16.950355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.886 [2024-07-15 23:28:16.950593] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.886 [2024-07-15 23:28:16.950848] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.886 [2024-07-15 23:28:16.950874] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.886 [2024-07-15 23:28:16.950889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.886 [2024-07-15 23:28:16.954460] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.886 [2024-07-15 23:28:16.963733] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.886 [2024-07-15 23:28:16.964243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-07-15 23:28:16.964274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.886 [2024-07-15 23:28:16.964291] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.886 [2024-07-15 23:28:16.964529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.886 [2024-07-15 23:28:16.964785] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.886 [2024-07-15 23:28:16.964810] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.886 [2024-07-15 23:28:16.964825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.886 [2024-07-15 23:28:16.968398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.886 [2024-07-15 23:28:16.977676] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.886 [2024-07-15 23:28:16.978167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-07-15 23:28:16.978198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.886 [2024-07-15 23:28:16.978216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.886 [2024-07-15 23:28:16.978454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.886 [2024-07-15 23:28:16.978697] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.886 [2024-07-15 23:28:16.978720] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.886 [2024-07-15 23:28:16.978736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.886 [2024-07-15 23:28:16.982320] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.886 [2024-07-15 23:28:16.991599] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.886 [2024-07-15 23:28:16.992115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-07-15 23:28:16.992146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.886 [2024-07-15 23:28:16.992164] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.886 [2024-07-15 23:28:16.992402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.886 [2024-07-15 23:28:16.992646] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.886 [2024-07-15 23:28:16.992669] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.886 [2024-07-15 23:28:16.992685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.886 [2024-07-15 23:28:16.996268] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.886 [2024-07-15 23:28:17.005542] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.886 [2024-07-15 23:28:17.006045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-07-15 23:28:17.006107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.886 [2024-07-15 23:28:17.006125] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.886 [2024-07-15 23:28:17.006363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.886 [2024-07-15 23:28:17.006606] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.886 [2024-07-15 23:28:17.006630] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.886 [2024-07-15 23:28:17.006645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.886 [2024-07-15 23:28:17.010224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.886 [2024-07-15 23:28:17.019496] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.886 [2024-07-15 23:28:17.020027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-07-15 23:28:17.020078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.886 [2024-07-15 23:28:17.020096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.886 [2024-07-15 23:28:17.020334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.886 [2024-07-15 23:28:17.020577] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.886 [2024-07-15 23:28:17.020601] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.886 [2024-07-15 23:28:17.020616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.886 [2024-07-15 23:28:17.024206] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.886 [2024-07-15 23:28:17.033482] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.886 [2024-07-15 23:28:17.034019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-07-15 23:28:17.034051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.886 [2024-07-15 23:28:17.034073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.886 [2024-07-15 23:28:17.034313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.886 [2024-07-15 23:28:17.034556] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.886 [2024-07-15 23:28:17.034579] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.886 [2024-07-15 23:28:17.034595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.886 [2024-07-15 23:28:17.038174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.886 [2024-07-15 23:28:17.047448] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.886 [2024-07-15 23:28:17.047912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-07-15 23:28:17.047943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.886 [2024-07-15 23:28:17.047961] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.886 [2024-07-15 23:28:17.048200] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.886 [2024-07-15 23:28:17.048442] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.886 [2024-07-15 23:28:17.048466] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.886 [2024-07-15 23:28:17.048482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.886 [2024-07-15 23:28:17.052063] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.886 [2024-07-15 23:28:17.061335] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.886 [2024-07-15 23:28:17.061868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-07-15 23:28:17.061900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.886 [2024-07-15 23:28:17.061917] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.886 [2024-07-15 23:28:17.062156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.886 [2024-07-15 23:28:17.062398] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.886 [2024-07-15 23:28:17.062422] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.886 [2024-07-15 23:28:17.062437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.886 [2024-07-15 23:28:17.066020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.886 [2024-07-15 23:28:17.075295] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.886 [2024-07-15 23:28:17.075799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-07-15 23:28:17.075830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.886 [2024-07-15 23:28:17.075847] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.886 [2024-07-15 23:28:17.076085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.886 [2024-07-15 23:28:17.076328] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.886 [2024-07-15 23:28:17.076358] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.886 [2024-07-15 23:28:17.076374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.886 [2024-07-15 23:28:17.079956] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.886 [2024-07-15 23:28:17.089233] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.886 [2024-07-15 23:28:17.089751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-07-15 23:28:17.089781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.886 [2024-07-15 23:28:17.089798] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.886 [2024-07-15 23:28:17.090037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.887 [2024-07-15 23:28:17.090279] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.887 [2024-07-15 23:28:17.090303] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.887 [2024-07-15 23:28:17.090319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.887 [2024-07-15 23:28:17.093900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.887 [2024-07-15 23:28:17.103177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.887 [2024-07-15 23:28:17.103695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-07-15 23:28:17.103726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.887 [2024-07-15 23:28:17.103754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.887 [2024-07-15 23:28:17.103995] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.887 [2024-07-15 23:28:17.104238] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.887 [2024-07-15 23:28:17.104261] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.887 [2024-07-15 23:28:17.104277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.887 [2024-07-15 23:28:17.107857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.887 [2024-07-15 23:28:17.117125] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.887 [2024-07-15 23:28:17.117645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-07-15 23:28:17.117676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.887 [2024-07-15 23:28:17.117693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.887 [2024-07-15 23:28:17.117942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.887 [2024-07-15 23:28:17.118186] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.887 [2024-07-15 23:28:17.118209] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.887 [2024-07-15 23:28:17.118224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.887 [2024-07-15 23:28:17.121801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.887 [2024-07-15 23:28:17.131077] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.887 [2024-07-15 23:28:17.131604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-07-15 23:28:17.131635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.887 [2024-07-15 23:28:17.131653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.887 [2024-07-15 23:28:17.131901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.887 [2024-07-15 23:28:17.132146] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.887 [2024-07-15 23:28:17.132169] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.887 [2024-07-15 23:28:17.132185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.887 [2024-07-15 23:28:17.135763] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.887 [2024-07-15 23:28:17.145048] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.887 [2024-07-15 23:28:17.145573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-07-15 23:28:17.145603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.887 [2024-07-15 23:28:17.145620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.887 [2024-07-15 23:28:17.145871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.887 [2024-07-15 23:28:17.146115] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.887 [2024-07-15 23:28:17.146140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.887 [2024-07-15 23:28:17.146155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.887 [2024-07-15 23:28:17.149727] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.887 [2024-07-15 23:28:17.159036] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.887 [2024-07-15 23:28:17.159573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-07-15 23:28:17.159613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.887 [2024-07-15 23:28:17.159628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.887 [2024-07-15 23:28:17.159858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.887 [2024-07-15 23:28:17.160094] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.887 [2024-07-15 23:28:17.160115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.887 [2024-07-15 23:28:17.160129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.887 [2024-07-15 23:28:17.163706] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.887 [2024-07-15 23:28:17.173028] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.887 [2024-07-15 23:28:17.173487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-07-15 23:28:17.173517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.887 [2024-07-15 23:28:17.173535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.887 [2024-07-15 23:28:17.173789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.887 [2024-07-15 23:28:17.174032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.887 [2024-07-15 23:28:17.174056] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.887 [2024-07-15 23:28:17.174071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.887 [2024-07-15 23:28:17.177633] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.887 [2024-07-15 23:28:17.186909] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.887 [2024-07-15 23:28:17.187452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-07-15 23:28:17.187500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:01.887 [2024-07-15 23:28:17.187517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:01.887 [2024-07-15 23:28:17.187767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:01.887 [2024-07-15 23:28:17.188011] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.887 [2024-07-15 23:28:17.188034] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.887 [2024-07-15 23:28:17.188049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.887 [2024-07-15 23:28:17.191615] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.146 [2024-07-15 23:28:17.200899] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.146 [2024-07-15 23:28:17.201403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.146 [2024-07-15 23:28:17.201434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.146 [2024-07-15 23:28:17.201451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.146 [2024-07-15 23:28:17.201689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.146 [2024-07-15 23:28:17.201941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.146 [2024-07-15 23:28:17.201966] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.146 [2024-07-15 23:28:17.201981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.146 [2024-07-15 23:28:17.205553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:02.146 [2024-07-15 23:28:17.214835] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.146 [2024-07-15 23:28:17.215309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.146 [2024-07-15 23:28:17.215339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.146 [2024-07-15 23:28:17.215356] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.146 [2024-07-15 23:28:17.215594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.146 [2024-07-15 23:28:17.215849] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.146 [2024-07-15 23:28:17.215874] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.146 [2024-07-15 23:28:17.215894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.146 [2024-07-15 23:28:17.219463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.146 [2024-07-15 23:28:17.228749] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.146 [2024-07-15 23:28:17.229229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.146 [2024-07-15 23:28:17.229259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.146 [2024-07-15 23:28:17.229277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.146 [2024-07-15 23:28:17.229515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.146 [2024-07-15 23:28:17.229769] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.146 [2024-07-15 23:28:17.229793] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.146 [2024-07-15 23:28:17.229808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.146 [2024-07-15 23:28:17.233382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:02.146 [2024-07-15 23:28:17.242652] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.146 [2024-07-15 23:28:17.243212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.146 [2024-07-15 23:28:17.243263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.146 [2024-07-15 23:28:17.243280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.146 [2024-07-15 23:28:17.243519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.146 [2024-07-15 23:28:17.243773] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.146 [2024-07-15 23:28:17.243797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.146 [2024-07-15 23:28:17.243812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.146 [2024-07-15 23:28:17.247381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.146 [2024-07-15 23:28:17.256658] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.146 [2024-07-15 23:28:17.257192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.146 [2024-07-15 23:28:17.257223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.146 [2024-07-15 23:28:17.257240] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.146 [2024-07-15 23:28:17.257479] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.146 [2024-07-15 23:28:17.257722] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.146 [2024-07-15 23:28:17.257757] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.146 [2024-07-15 23:28:17.257773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.146 [2024-07-15 23:28:17.261343] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:02.146 [2024-07-15 23:28:17.270612] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.146 [2024-07-15 23:28:17.271120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.146 [2024-07-15 23:28:17.271155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.146 [2024-07-15 23:28:17.271173] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.146 [2024-07-15 23:28:17.271412] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.146 [2024-07-15 23:28:17.271654] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.146 [2024-07-15 23:28:17.271678] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.146 [2024-07-15 23:28:17.271693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.146 [2024-07-15 23:28:17.275271] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.146 [2024-07-15 23:28:17.284573] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.146 [2024-07-15 23:28:17.285112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.146 [2024-07-15 23:28:17.285143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.146 [2024-07-15 23:28:17.285161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.146 [2024-07-15 23:28:17.285399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.146 [2024-07-15 23:28:17.285679] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.146 [2024-07-15 23:28:17.285699] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.146 [2024-07-15 23:28:17.285713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.146 [2024-07-15 23:28:17.288893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:02.146 [2024-07-15 23:28:17.298513] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.146 [2024-07-15 23:28:17.298952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.146 [2024-07-15 23:28:17.298983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.146 [2024-07-15 23:28:17.299001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.146 [2024-07-15 23:28:17.299238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.146 [2024-07-15 23:28:17.299481] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.146 [2024-07-15 23:28:17.299505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.146 [2024-07-15 23:28:17.299520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.146 [2024-07-15 23:28:17.303100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.146 [2024-07-15 23:28:17.312380] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.146 [2024-07-15 23:28:17.312827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.146 [2024-07-15 23:28:17.312859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.146 [2024-07-15 23:28:17.312876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.146 [2024-07-15 23:28:17.313114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.146 [2024-07-15 23:28:17.313362] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.146 [2024-07-15 23:28:17.313386] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.146 [2024-07-15 23:28:17.313401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.146 [2024-07-15 23:28:17.316982] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:02.146 [2024-07-15 23:28:17.326263] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.146 [2024-07-15 23:28:17.326710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.146 [2024-07-15 23:28:17.326750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.146 [2024-07-15 23:28:17.326770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.146 [2024-07-15 23:28:17.327009] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.146 [2024-07-15 23:28:17.327252] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.146 [2024-07-15 23:28:17.327276] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.146 [2024-07-15 23:28:17.327291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.146 [2024-07-15 23:28:17.330877] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.146 [2024-07-15 23:28:17.340154] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.146 [2024-07-15 23:28:17.340576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.146 [2024-07-15 23:28:17.340606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.146 [2024-07-15 23:28:17.340623] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.146 [2024-07-15 23:28:17.340873] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.146 [2024-07-15 23:28:17.341116] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.146 [2024-07-15 23:28:17.341140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.146 [2024-07-15 23:28:17.341155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.146 [2024-07-15 23:28:17.344730] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:02.146 [2024-07-15 23:28:17.354048] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.146 [2024-07-15 23:28:17.354467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.146 [2024-07-15 23:28:17.354498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.146 [2024-07-15 23:28:17.354516] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.146 [2024-07-15 23:28:17.354768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.146 [2024-07-15 23:28:17.355011] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.146 [2024-07-15 23:28:17.355035] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.146 [2024-07-15 23:28:17.355050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.146 [2024-07-15 23:28:17.358628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.146 [2024-07-15 23:28:17.367916] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.146 [2024-07-15 23:28:17.368319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.146 [2024-07-15 23:28:17.368350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.146 [2024-07-15 23:28:17.368368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.146 [2024-07-15 23:28:17.368606] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.146 [2024-07-15 23:28:17.368862] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.146 [2024-07-15 23:28:17.368886] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.146 [2024-07-15 23:28:17.368901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.146 [2024-07-15 23:28:17.372473] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:02.146 [2024-07-15 23:28:17.381758] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.146 [2024-07-15 23:28:17.382181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.146 [2024-07-15 23:28:17.382213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.146 [2024-07-15 23:28:17.382230] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.146 [2024-07-15 23:28:17.382468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.146 [2024-07-15 23:28:17.382711] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.146 [2024-07-15 23:28:17.382735] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.146 [2024-07-15 23:28:17.382764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.146 [2024-07-15 23:28:17.386338] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.147 [2024-07-15 23:28:17.395610] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.147 [2024-07-15 23:28:17.396030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.147 [2024-07-15 23:28:17.396060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.147 [2024-07-15 23:28:17.396077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.147 [2024-07-15 23:28:17.396315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.147 [2024-07-15 23:28:17.396557] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.147 [2024-07-15 23:28:17.396581] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.147 [2024-07-15 23:28:17.396596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.147 [2024-07-15 23:28:17.400178] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:02.147 [2024-07-15 23:28:17.409453] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.147 [2024-07-15 23:28:17.409895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.147 [2024-07-15 23:28:17.409921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.147 [2024-07-15 23:28:17.409961] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.147 [2024-07-15 23:28:17.410183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.147 [2024-07-15 23:28:17.410424] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.147 [2024-07-15 23:28:17.410445] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.147 [2024-07-15 23:28:17.410459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.147 [2024-07-15 23:28:17.414076] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.147 [2024-07-15 23:28:17.423433] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.147 [2024-07-15 23:28:17.423847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.147 [2024-07-15 23:28:17.423879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.147 [2024-07-15 23:28:17.423897] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.147 [2024-07-15 23:28:17.424136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.147 [2024-07-15 23:28:17.424380] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.147 [2024-07-15 23:28:17.424403] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.147 [2024-07-15 23:28:17.424419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.147 [2024-07-15 23:28:17.428003] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:02.147 [2024-07-15 23:28:17.437286] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.147 [2024-07-15 23:28:17.437696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.147 [2024-07-15 23:28:17.437727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.147 [2024-07-15 23:28:17.437755] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.147 [2024-07-15 23:28:17.437996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.147 [2024-07-15 23:28:17.438239] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.147 [2024-07-15 23:28:17.438263] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.147 [2024-07-15 23:28:17.438278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.147 [2024-07-15 23:28:17.441854] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.147 [2024-07-15 23:28:17.451122] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.147 [2024-07-15 23:28:17.451559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.147 [2024-07-15 23:28:17.451590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.147 [2024-07-15 23:28:17.451607] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.147 [2024-07-15 23:28:17.451857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.147 [2024-07-15 23:28:17.452101] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.147 [2024-07-15 23:28:17.452130] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.147 [2024-07-15 23:28:17.452146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.147 [2024-07-15 23:28:17.455713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:02.406 [2024-07-15 23:28:17.464986] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.406 [2024-07-15 23:28:17.465401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.406 [2024-07-15 23:28:17.465431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.406 [2024-07-15 23:28:17.465449] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.406 [2024-07-15 23:28:17.465687] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.406 [2024-07-15 23:28:17.465940] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.406 [2024-07-15 23:28:17.465965] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.406 [2024-07-15 23:28:17.465980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.406 [2024-07-15 23:28:17.469550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.406 [2024-07-15 23:28:17.478823] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.406 [2024-07-15 23:28:17.479212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.406 [2024-07-15 23:28:17.479243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.406 [2024-07-15 23:28:17.479261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.406 [2024-07-15 23:28:17.479499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.406 [2024-07-15 23:28:17.479752] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.406 [2024-07-15 23:28:17.479776] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.406 [2024-07-15 23:28:17.479791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.406 [2024-07-15 23:28:17.483358] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:02.406 [2024-07-15 23:28:17.492845] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.406 [2024-07-15 23:28:17.493269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.406 [2024-07-15 23:28:17.493300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.406 [2024-07-15 23:28:17.493317] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.406 [2024-07-15 23:28:17.493555] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.406 [2024-07-15 23:28:17.493808] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.406 [2024-07-15 23:28:17.493832] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.406 [2024-07-15 23:28:17.493847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.406 [2024-07-15 23:28:17.497419] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.406 [2024-07-15 23:28:17.506696] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.406 [2024-07-15 23:28:17.507088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.406 [2024-07-15 23:28:17.507119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.406 [2024-07-15 23:28:17.507137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.406 [2024-07-15 23:28:17.507375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.406 [2024-07-15 23:28:17.507617] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.406 [2024-07-15 23:28:17.507641] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.406 [2024-07-15 23:28:17.507656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.406 [2024-07-15 23:28:17.511235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:02.406 [2024-07-15 23:28:17.520709] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.406 [2024-07-15 23:28:17.521108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.406 [2024-07-15 23:28:17.521139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.406 [2024-07-15 23:28:17.521156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.406 [2024-07-15 23:28:17.521394] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.406 [2024-07-15 23:28:17.521637] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.406 [2024-07-15 23:28:17.521660] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.406 [2024-07-15 23:28:17.521675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.406 [2024-07-15 23:28:17.525256] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.406 [2024-07-15 23:28:17.534756] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.406 [2024-07-15 23:28:17.535170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.406 [2024-07-15 23:28:17.535201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.406 [2024-07-15 23:28:17.535218] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.406 [2024-07-15 23:28:17.535456] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.406 [2024-07-15 23:28:17.535699] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.406 [2024-07-15 23:28:17.535723] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.407 [2024-07-15 23:28:17.535747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.407 [2024-07-15 23:28:17.539319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:02.407 [2024-07-15 23:28:17.548593] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.407 [2024-07-15 23:28:17.549024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.407 [2024-07-15 23:28:17.549055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.407 [2024-07-15 23:28:17.549073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.407 [2024-07-15 23:28:17.549317] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.407 [2024-07-15 23:28:17.549560] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.407 [2024-07-15 23:28:17.549584] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.407 [2024-07-15 23:28:17.549599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.407 [2024-07-15 23:28:17.553179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.407 [2024-07-15 23:28:17.562464] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.407 [2024-07-15 23:28:17.562890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.407 [2024-07-15 23:28:17.562921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.407 [2024-07-15 23:28:17.562939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.407 [2024-07-15 23:28:17.563177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.407 [2024-07-15 23:28:17.563420] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.407 [2024-07-15 23:28:17.563444] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.407 [2024-07-15 23:28:17.563459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.407 [2024-07-15 23:28:17.567038] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:02.407 [2024-07-15 23:28:17.576307] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.407 [2024-07-15 23:28:17.576766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.407 [2024-07-15 23:28:17.576796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.407 [2024-07-15 23:28:17.576813] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.407 [2024-07-15 23:28:17.577051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.407 [2024-07-15 23:28:17.577294] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.407 [2024-07-15 23:28:17.577319] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.407 [2024-07-15 23:28:17.577334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.407 [2024-07-15 23:28:17.580911] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.407 [2024-07-15 23:28:17.590184] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.407 [2024-07-15 23:28:17.590610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.407 [2024-07-15 23:28:17.590640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.407 [2024-07-15 23:28:17.590657] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.407 [2024-07-15 23:28:17.590906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.407 [2024-07-15 23:28:17.591149] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.407 [2024-07-15 23:28:17.591173] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.407 [2024-07-15 23:28:17.591194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.407 [2024-07-15 23:28:17.594771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:02.407 [2024-07-15 23:28:17.604044] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.407 [2024-07-15 23:28:17.604459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.407 [2024-07-15 23:28:17.604490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.407 [2024-07-15 23:28:17.604507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.407 [2024-07-15 23:28:17.604755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.407 [2024-07-15 23:28:17.604999] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.407 [2024-07-15 23:28:17.605022] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.407 [2024-07-15 23:28:17.605037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.407 [2024-07-15 23:28:17.608608] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.407 [2024-07-15 23:28:17.617889] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.407 [2024-07-15 23:28:17.618306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.407 [2024-07-15 23:28:17.618336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.407 [2024-07-15 23:28:17.618353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.407 [2024-07-15 23:28:17.618591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.407 [2024-07-15 23:28:17.618845] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.407 [2024-07-15 23:28:17.618870] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.407 [2024-07-15 23:28:17.618885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.407 [2024-07-15 23:28:17.622452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:02.407 [2024-07-15 23:28:17.631719] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.407 [2024-07-15 23:28:17.632108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.407 [2024-07-15 23:28:17.632139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.407 [2024-07-15 23:28:17.632156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.407 [2024-07-15 23:28:17.632394] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.407 [2024-07-15 23:28:17.632637] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.407 [2024-07-15 23:28:17.632662] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.407 [2024-07-15 23:28:17.632677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.407 [2024-07-15 23:28:17.636258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.407 [2024-07-15 23:28:17.645734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.407 [2024-07-15 23:28:17.646164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.407 [2024-07-15 23:28:17.646195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.407 [2024-07-15 23:28:17.646212] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.407 [2024-07-15 23:28:17.646450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.407 [2024-07-15 23:28:17.646693] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.407 [2024-07-15 23:28:17.646716] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.407 [2024-07-15 23:28:17.646731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.407 [2024-07-15 23:28:17.650312] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:02.407 [2024-07-15 23:28:17.659591] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.407 [2024-07-15 23:28:17.660022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.407 [2024-07-15 23:28:17.660054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.407 [2024-07-15 23:28:17.660071] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.407 [2024-07-15 23:28:17.660309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.407 [2024-07-15 23:28:17.660552] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.407 [2024-07-15 23:28:17.660576] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.407 [2024-07-15 23:28:17.660593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.407 [2024-07-15 23:28:17.664173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.407 [2024-07-15 23:28:17.673450] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.407 [2024-07-15 23:28:17.673846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.407 [2024-07-15 23:28:17.673878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.407 [2024-07-15 23:28:17.673895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.407 [2024-07-15 23:28:17.674133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.407 [2024-07-15 23:28:17.674376] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.407 [2024-07-15 23:28:17.674400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.407 [2024-07-15 23:28:17.674415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.407 [2024-07-15 23:28:17.677991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:02.407 [2024-07-15 23:28:17.687290] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.407 [2024-07-15 23:28:17.687682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.407 [2024-07-15 23:28:17.687713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.407 [2024-07-15 23:28:17.687731] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.407 [2024-07-15 23:28:17.687984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.407 [2024-07-15 23:28:17.688227] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.408 [2024-07-15 23:28:17.688251] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.408 [2024-07-15 23:28:17.688266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.408 [2024-07-15 23:28:17.691844] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.408 [2024-07-15 23:28:17.701326] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.408 [2024-07-15 23:28:17.701753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.408 [2024-07-15 23:28:17.701785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.408 [2024-07-15 23:28:17.701802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.408 [2024-07-15 23:28:17.702040] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.408 [2024-07-15 23:28:17.702283] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.408 [2024-07-15 23:28:17.702307] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.408 [2024-07-15 23:28:17.702322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.408 [2024-07-15 23:28:17.705907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:02.408 [2024-07-15 23:28:17.715197] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.408 [2024-07-15 23:28:17.715614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.408 [2024-07-15 23:28:17.715645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.408 [2024-07-15 23:28:17.715663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.408 [2024-07-15 23:28:17.715911] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.408 [2024-07-15 23:28:17.716154] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.408 [2024-07-15 23:28:17.716178] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.408 [2024-07-15 23:28:17.716193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.677 [2024-07-15 23:28:17.719770] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.677 [2024-07-15 23:28:17.729055] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.677 [2024-07-15 23:28:17.729477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.677 [2024-07-15 23:28:17.729508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.677 [2024-07-15 23:28:17.729525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.677 [2024-07-15 23:28:17.729774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.677 [2024-07-15 23:28:17.730018] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.677 [2024-07-15 23:28:17.730042] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.677 [2024-07-15 23:28:17.730062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.677 [2024-07-15 23:28:17.733638] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:02.677 [2024-07-15 23:28:17.742927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.677 [2024-07-15 23:28:17.743333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.677 [2024-07-15 23:28:17.743364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.677 [2024-07-15 23:28:17.743381] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.677 [2024-07-15 23:28:17.743620] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.677 [2024-07-15 23:28:17.743874] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.677 [2024-07-15 23:28:17.743898] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.677 [2024-07-15 23:28:17.743913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.677 [2024-07-15 23:28:17.747479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.677 [2024-07-15 23:28:17.756769] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.677 [2024-07-15 23:28:17.757157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.677 [2024-07-15 23:28:17.757187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.677 [2024-07-15 23:28:17.757205] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.677 [2024-07-15 23:28:17.757442] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.677 [2024-07-15 23:28:17.757686] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.677 [2024-07-15 23:28:17.757710] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.677 [2024-07-15 23:28:17.757725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.677 [2024-07-15 23:28:17.761306] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:02.677 [2024-07-15 23:28:17.770806] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.677 [2024-07-15 23:28:17.771205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.677 [2024-07-15 23:28:17.771235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.677 [2024-07-15 23:28:17.771252] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.677 [2024-07-15 23:28:17.771491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.677 [2024-07-15 23:28:17.771733] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.677 [2024-07-15 23:28:17.771767] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.677 [2024-07-15 23:28:17.771783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.677 [2024-07-15 23:28:17.775351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.677 [2024-07-15 23:28:17.784841] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.677 [2024-07-15 23:28:17.785256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.677 [2024-07-15 23:28:17.785291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.677 [2024-07-15 23:28:17.785310] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.677 [2024-07-15 23:28:17.785548] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.677 [2024-07-15 23:28:17.785800] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.677 [2024-07-15 23:28:17.785825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.677 [2024-07-15 23:28:17.785840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.677 [2024-07-15 23:28:17.789409] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:02.677 [2024-07-15 23:28:17.798698] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.677 [2024-07-15 23:28:17.799145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.677 [2024-07-15 23:28:17.799199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.677 [2024-07-15 23:28:17.799216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.677 [2024-07-15 23:28:17.799454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.677 [2024-07-15 23:28:17.799697] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.677 [2024-07-15 23:28:17.799721] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.677 [2024-07-15 23:28:17.799744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.677 [2024-07-15 23:28:17.803314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.677 [2024-07-15 23:28:17.812592] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.677 [2024-07-15 23:28:17.813023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.677 [2024-07-15 23:28:17.813083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.677 [2024-07-15 23:28:17.813101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.677 [2024-07-15 23:28:17.813339] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.677 [2024-07-15 23:28:17.813582] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.677 [2024-07-15 23:28:17.813606] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.677 [2024-07-15 23:28:17.813621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.677 [2024-07-15 23:28:17.817198] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:02.677 [2024-07-15 23:28:17.826465] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.677 [2024-07-15 23:28:17.826887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.677 [2024-07-15 23:28:17.826918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.677 [2024-07-15 23:28:17.826936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.677 [2024-07-15 23:28:17.827173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.677 [2024-07-15 23:28:17.827422] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.677 [2024-07-15 23:28:17.827446] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.677 [2024-07-15 23:28:17.827461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.677 [2024-07-15 23:28:17.831044] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.677 [2024-07-15 23:28:17.840316] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.677 [2024-07-15 23:28:17.840743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.677 [2024-07-15 23:28:17.840774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.677 [2024-07-15 23:28:17.840791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.677 [2024-07-15 23:28:17.841030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.677 [2024-07-15 23:28:17.841272] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.677 [2024-07-15 23:28:17.841296] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.677 [2024-07-15 23:28:17.841311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.677 [2024-07-15 23:28:17.844887] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:02.677 [2024-07-15 23:28:17.854160] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.677 [2024-07-15 23:28:17.854583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.677 [2024-07-15 23:28:17.854614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.677 [2024-07-15 23:28:17.854631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.677 [2024-07-15 23:28:17.854881] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.677 [2024-07-15 23:28:17.855124] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.677 [2024-07-15 23:28:17.855148] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.677 [2024-07-15 23:28:17.855163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.677 [2024-07-15 23:28:17.858732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.677 [2024-07-15 23:28:17.868016] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.677 [2024-07-15 23:28:17.868445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.677 [2024-07-15 23:28:17.868476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.677 [2024-07-15 23:28:17.868492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.677 [2024-07-15 23:28:17.868730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.677 [2024-07-15 23:28:17.868983] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.678 [2024-07-15 23:28:17.869007] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.678 [2024-07-15 23:28:17.869022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.678 [2024-07-15 23:28:17.872595] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:02.678 [2024-07-15 23:28:17.881880] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.678 [2024-07-15 23:28:17.882298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.678 [2024-07-15 23:28:17.882328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.678 [2024-07-15 23:28:17.882346] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.678 [2024-07-15 23:28:17.882584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.678 [2024-07-15 23:28:17.882836] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.678 [2024-07-15 23:28:17.882860] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.678 [2024-07-15 23:28:17.882875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.678 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2442024 Killed "${NVMF_APP[@]}" "$@" 00:25:02.678 23:28:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:25:02.678 23:28:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:02.678 23:28:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:02.678 23:28:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:02.678 23:28:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:02.678 [2024-07-15 23:28:17.886443] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.678 23:28:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2443109 00:25:02.678 23:28:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:02.678 23:28:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2443109 00:25:02.678 23:28:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2443109 ']' 00:25:02.678 23:28:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:02.678 23:28:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:02.678 23:28:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:02.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:02.678 23:28:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:02.678 23:28:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:02.678 [2024-07-15 23:28:17.895717] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.678 [2024-07-15 23:28:17.896163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.678 [2024-07-15 23:28:17.896210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.678 [2024-07-15 23:28:17.896228] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.678 [2024-07-15 23:28:17.896466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.678 [2024-07-15 23:28:17.896709] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.678 [2024-07-15 23:28:17.896733] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.678 [2024-07-15 23:28:17.896758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.678 [2024-07-15 23:28:17.900327] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.678 [2024-07-15 23:28:17.909587] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.678 [2024-07-15 23:28:17.910046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.678 [2024-07-15 23:28:17.910072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.678 [2024-07-15 23:28:17.910100] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.678 [2024-07-15 23:28:17.910337] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.678 [2024-07-15 23:28:17.910574] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.678 [2024-07-15 23:28:17.910594] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.678 [2024-07-15 23:28:17.910607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.678 [2024-07-15 23:28:17.914099] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
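The shell trace interleaved with the reconnect errors (tgt_init → nvmfappstart -m 0xE → nvmf_tgt relaunched in the cvl_0_0_ns_spdk namespace → waitforlisten on /var/tmp/spdk.sock) is the harness restarting the NVMe-oF target that the bdevperf initiator keeps trying to reach. Roughly, and leaving the harness bookkeeping aside, the sequence amounts to the sketch below; the rpc.py poll loop only approximates waitforlisten and is not the autotest_common.sh implementation.

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc_addr=/var/tmp/spdk.sock

# Relaunch the target inside the test namespace with the flags shown in the log.
sudo ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

# Poll the RPC socket until the target answers (the trace shows max_retries=100);
# depending on socket permissions the rpc.py call may also need sudo.
for ((i = 0; i < 100; i++)); do
    if "$SPDK_DIR/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1; then
        echo "nvmf_tgt (pid $nvmfpid) is listening on $rpc_addr"
        break
    fi
    sleep 0.5
done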
00:25:02.678 [2024-07-15 23:28:17.923114] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.678 [2024-07-15 23:28:17.923491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.678 [2024-07-15 23:28:17.923517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.678 [2024-07-15 23:28:17.923531] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.678 [2024-07-15 23:28:17.923752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.678 [2024-07-15 23:28:17.923972] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.678 [2024-07-15 23:28:17.923994] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.678 [2024-07-15 23:28:17.924007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.678 [2024-07-15 23:28:17.927127] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.678 [2024-07-15 23:28:17.935938] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:25:02.678 [2024-07-15 23:28:17.936013] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:02.678 [2024-07-15 23:28:17.936532] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.678 [2024-07-15 23:28:17.936967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.678 [2024-07-15 23:28:17.936995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.678 [2024-07-15 23:28:17.937011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.678 [2024-07-15 23:28:17.937240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.678 [2024-07-15 23:28:17.937449] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.678 [2024-07-15 23:28:17.937469] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.678 [2024-07-15 23:28:17.937482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.678 [2024-07-15 23:28:17.940522] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:02.678 [2024-07-15 23:28:17.949785] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.678 [2024-07-15 23:28:17.950281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.678 [2024-07-15 23:28:17.950305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.678 [2024-07-15 23:28:17.950335] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.678 [2024-07-15 23:28:17.950529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.678 [2024-07-15 23:28:17.950752] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.678 [2024-07-15 23:28:17.950773] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.678 [2024-07-15 23:28:17.950786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.678 [2024-07-15 23:28:17.953764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.678 [2024-07-15 23:28:17.963124] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.678 [2024-07-15 23:28:17.963531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.678 [2024-07-15 23:28:17.963556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.678 [2024-07-15 23:28:17.963570] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.678 [2024-07-15 23:28:17.963808] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.678 [2024-07-15 23:28:17.964029] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.678 [2024-07-15 23:28:17.964049] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.678 [2024-07-15 23:28:17.964061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.678 [2024-07-15 23:28:17.967096] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:02.678 EAL: No free 2048 kB hugepages reported on node 1 00:25:02.678 [2024-07-15 23:28:17.976941] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.678 [2024-07-15 23:28:17.977447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.678 [2024-07-15 23:28:17.977478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.678 [2024-07-15 23:28:17.977495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.678 [2024-07-15 23:28:17.977734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.678 [2024-07-15 23:28:17.977986] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.678 [2024-07-15 23:28:17.978008] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.678 [2024-07-15 23:28:17.978021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.678 [2024-07-15 23:28:17.981599] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.966 [2024-07-15 23:28:17.990759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.966 [2024-07-15 23:28:17.991251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 23:28:17.991283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.966 [2024-07-15 23:28:17.991301] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.966 [2024-07-15 23:28:17.991546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.966 [2024-07-15 23:28:17.991814] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.966 [2024-07-15 23:28:17.991836] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.966 [2024-07-15 23:28:17.991850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.966 [2024-07-15 23:28:17.995443] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
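The EAL notice at the top of this block only reports that NUMA node 1 has no 2 MiB hugepages reserved; it is informational here, and the target goes on to start its reactors further down. If the per-node pools need checking on the test host, they are visible through standard Linux interfaces (sysfs and /proc, nothing SPDK-specific):

# Per-node 2 MiB hugepage pools and the global hugepage summary.
grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
grep Huge /proc/meminfo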
00:25:02.966 [2024-07-15 23:28:18.004560] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.966 [2024-07-15 23:28:18.004870] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:02.966 [2024-07-15 23:28:18.005040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 23:28:18.005068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.966 [2024-07-15 23:28:18.005084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.966 [2024-07-15 23:28:18.005299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.966 [2024-07-15 23:28:18.005518] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.967 [2024-07-15 23:28:18.005539] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.967 [2024-07-15 23:28:18.005553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.967 [2024-07-15 23:28:18.009210] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.967 [2024-07-15 23:28:18.018597] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.967 [2024-07-15 23:28:18.019206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.967 [2024-07-15 23:28:18.019247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.967 [2024-07-15 23:28:18.019269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.967 [2024-07-15 23:28:18.019519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.967 [2024-07-15 23:28:18.019792] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.967 [2024-07-15 23:28:18.019814] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.967 [2024-07-15 23:28:18.019831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.967 [2024-07-15 23:28:18.023316] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
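"Total cores available: 3" follows directly from the -m 0xE core mask passed to nvmf_tgt: 0xE is binary 1110, i.e. cores 1, 2 and 3, which is also why three reactors are reported a little further down. Decoding the mask by hand, purely as an illustration:

# Decode the 0xE core mask the target was started with.
mask=0xE
for ((cpu = 0; cpu < 8; cpu++)); do
    (( (mask >> cpu) & 1 )) && echo "core $cpu selected"
done
# prints: core 1 selected / core 2 selected / core 3 selected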
00:25:02.967 [2024-07-15 23:28:18.032440] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.967 [2024-07-15 23:28:18.032945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.967 [2024-07-15 23:28:18.032971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.967 [2024-07-15 23:28:18.033001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.967 [2024-07-15 23:28:18.033249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.967 [2024-07-15 23:28:18.033493] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.967 [2024-07-15 23:28:18.033528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.967 [2024-07-15 23:28:18.033545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.967 [2024-07-15 23:28:18.037073] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.967 [2024-07-15 23:28:18.046172] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.967 [2024-07-15 23:28:18.046582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.967 [2024-07-15 23:28:18.046614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.967 [2024-07-15 23:28:18.046632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.967 [2024-07-15 23:28:18.046888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.967 [2024-07-15 23:28:18.047131] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.967 [2024-07-15 23:28:18.047155] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.967 [2024-07-15 23:28:18.047171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.967 [2024-07-15 23:28:18.050672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:02.967 [2024-07-15 23:28:18.059973] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.967 [2024-07-15 23:28:18.060380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.967 [2024-07-15 23:28:18.060412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.967 [2024-07-15 23:28:18.060430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.967 [2024-07-15 23:28:18.060668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.967 [2024-07-15 23:28:18.060920] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.967 [2024-07-15 23:28:18.060943] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.967 [2024-07-15 23:28:18.060956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.967 [2024-07-15 23:28:18.064461] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.967 [2024-07-15 23:28:18.073874] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.967 [2024-07-15 23:28:18.074421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.967 [2024-07-15 23:28:18.074462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.967 [2024-07-15 23:28:18.074483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.967 [2024-07-15 23:28:18.074735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.967 [2024-07-15 23:28:18.074983] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.967 [2024-07-15 23:28:18.075004] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.967 [2024-07-15 23:28:18.075036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.967 [2024-07-15 23:28:18.078547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:02.967 [2024-07-15 23:28:18.087904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.967 [2024-07-15 23:28:18.088419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.967 [2024-07-15 23:28:18.088450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.967 [2024-07-15 23:28:18.088469] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.967 [2024-07-15 23:28:18.088708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.967 [2024-07-15 23:28:18.088962] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.967 [2024-07-15 23:28:18.088984] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.967 [2024-07-15 23:28:18.088997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.967 [2024-07-15 23:28:18.092536] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.967 [2024-07-15 23:28:18.101853] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.967 [2024-07-15 23:28:18.102297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.967 [2024-07-15 23:28:18.102328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.967 [2024-07-15 23:28:18.102346] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.967 [2024-07-15 23:28:18.102586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.967 [2024-07-15 23:28:18.102860] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.967 [2024-07-15 23:28:18.102882] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.967 [2024-07-15 23:28:18.102896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.967 [2024-07-15 23:28:18.106390] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:02.967 [2024-07-15 23:28:18.115732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.967 [2024-07-15 23:28:18.116196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.967 [2024-07-15 23:28:18.116227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.967 [2024-07-15 23:28:18.116244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.967 [2024-07-15 23:28:18.116484] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.967 [2024-07-15 23:28:18.116727] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.967 [2024-07-15 23:28:18.116760] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.967 [2024-07-15 23:28:18.116791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.967 [2024-07-15 23:28:18.120312] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.967 [2024-07-15 23:28:18.122835] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:02.967 [2024-07-15 23:28:18.122865] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:02.967 [2024-07-15 23:28:18.122893] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:02.967 [2024-07-15 23:28:18.122906] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:02.967 [2024-07-15 23:28:18.122922] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:02.967 [2024-07-15 23:28:18.122971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:02.967 [2024-07-15 23:28:18.123034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:02.967 [2024-07-15 23:28:18.123038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:02.967 [2024-07-15 23:28:18.129240] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.967 [2024-07-15 23:28:18.129775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.967 [2024-07-15 23:28:18.129823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.967 [2024-07-15 23:28:18.129842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.967 [2024-07-15 23:28:18.130098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.967 [2024-07-15 23:28:18.130315] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.967 [2024-07-15 23:28:18.130336] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.967 [2024-07-15 23:28:18.130352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
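The app_setup_trace notices above are the target telling the operator how to capture its trace data for this run: either snapshot it live with spdk_trace or copy the shared-memory file for offline analysis. The commands below use exactly the hints printed in the log; the -i 0 instance ID and the nvmf_trace.0 shm name belong to this run, and if spdk_trace is not on PATH it is normally found under build/bin in the SPDK tree.

# Live snapshot of the nvmf trace group for instance 0, as suggested in the log.
sudo spdk_trace -s nvmf -i 0
# Or keep the shared-memory trace file for offline analysis.
sudo cp /dev/shm/nvmf_trace.0 ./nvmf_trace.0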
00:25:02.967 [2024-07-15 23:28:18.133542] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.967 [2024-07-15 23:28:18.142781] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.967 [2024-07-15 23:28:18.143388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.967 [2024-07-15 23:28:18.143439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.967 [2024-07-15 23:28:18.143459] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.967 [2024-07-15 23:28:18.143677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.967 [2024-07-15 23:28:18.143936] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.967 [2024-07-15 23:28:18.143959] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.968 [2024-07-15 23:28:18.143977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.968 [2024-07-15 23:28:18.147128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.968 [2024-07-15 23:28:18.156280] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.968 [2024-07-15 23:28:18.156802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.968 [2024-07-15 23:28:18.156856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.968 [2024-07-15 23:28:18.156889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.968 [2024-07-15 23:28:18.157128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.968 [2024-07-15 23:28:18.157346] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.968 [2024-07-15 23:28:18.157367] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.968 [2024-07-15 23:28:18.157384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.968 [2024-07-15 23:28:18.160553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:02.968 [2024-07-15 23:28:18.170127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.968 [2024-07-15 23:28:18.170692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.968 [2024-07-15 23:28:18.170751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.968 [2024-07-15 23:28:18.170774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.968 [2024-07-15 23:28:18.170998] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.968 [2024-07-15 23:28:18.171243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.968 [2024-07-15 23:28:18.171265] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.968 [2024-07-15 23:28:18.171282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.968 [2024-07-15 23:28:18.174547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.968 [2024-07-15 23:28:18.183564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.968 [2024-07-15 23:28:18.184111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.968 [2024-07-15 23:28:18.184147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.968 [2024-07-15 23:28:18.184180] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.968 [2024-07-15 23:28:18.184402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.968 [2024-07-15 23:28:18.184618] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.968 [2024-07-15 23:28:18.184639] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.968 [2024-07-15 23:28:18.184655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.968 [2024-07-15 23:28:18.187828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:02.968 [2024-07-15 23:28:18.197023] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.968 [2024-07-15 23:28:18.197613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.968 [2024-07-15 23:28:18.197663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.968 [2024-07-15 23:28:18.197683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.968 [2024-07-15 23:28:18.197931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.968 [2024-07-15 23:28:18.198168] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.968 [2024-07-15 23:28:18.198189] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.968 [2024-07-15 23:28:18.198206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.968 [2024-07-15 23:28:18.201367] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.968 [2024-07-15 23:28:18.210486] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.968 [2024-07-15 23:28:18.210929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.968 [2024-07-15 23:28:18.210956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.968 [2024-07-15 23:28:18.210987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.968 [2024-07-15 23:28:18.211230] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.968 [2024-07-15 23:28:18.211443] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.968 [2024-07-15 23:28:18.211464] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.968 [2024-07-15 23:28:18.211477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.968 [2024-07-15 23:28:18.214643] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:02.968 [2024-07-15 23:28:18.224001] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.968 [2024-07-15 23:28:18.224447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.968 [2024-07-15 23:28:18.224489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.968 [2024-07-15 23:28:18.224504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.968 [2024-07-15 23:28:18.224748] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.968 [2024-07-15 23:28:18.224969] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.968 [2024-07-15 23:28:18.224990] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.968 [2024-07-15 23:28:18.225004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.968 [2024-07-15 23:28:18.228184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.968 [2024-07-15 23:28:18.237543] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.968 [2024-07-15 23:28:18.237967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.968 [2024-07-15 23:28:18.237994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.968 [2024-07-15 23:28:18.238022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.968 [2024-07-15 23:28:18.238248] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.968 [2024-07-15 23:28:18.238461] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.968 [2024-07-15 23:28:18.238482] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.968 [2024-07-15 23:28:18.238495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.968 [2024-07-15 23:28:18.241695] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:02.968 [2024-07-15 23:28:18.251026] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.968 [2024-07-15 23:28:18.251527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.968 [2024-07-15 23:28:18.251569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.968 [2024-07-15 23:28:18.251584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.968 [2024-07-15 23:28:18.251820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.968 [2024-07-15 23:28:18.252039] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.968 [2024-07-15 23:28:18.252076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.968 [2024-07-15 23:28:18.252094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.968 [2024-07-15 23:28:18.255255] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.968 [2024-07-15 23:28:18.264553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.968 [2024-07-15 23:28:18.265050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.968 [2024-07-15 23:28:18.265076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.968 [2024-07-15 23:28:18.265106] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.968 [2024-07-15 23:28:18.265314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.968 [2024-07-15 23:28:18.265525] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.968 [2024-07-15 23:28:18.265546] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.968 [2024-07-15 23:28:18.265559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.968 [2024-07-15 23:28:18.268719] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:02.968 [2024-07-15 23:28:18.278130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.968 [2024-07-15 23:28:18.278636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.968 [2024-07-15 23:28:18.278664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:02.968 [2024-07-15 23:28:18.278680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:02.968 [2024-07-15 23:28:18.278903] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:02.968 [2024-07-15 23:28:18.279122] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.968 [2024-07-15 23:28:18.279144] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.968 [2024-07-15 23:28:18.279158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.227 [2024-07-15 23:28:18.282432] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:03.227 [2024-07-15 23:28:18.291699] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.227 [2024-07-15 23:28:18.292189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.227 [2024-07-15 23:28:18.292231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.227 [2024-07-15 23:28:18.292246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.227 [2024-07-15 23:28:18.292460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.227 [2024-07-15 23:28:18.292672] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.227 [2024-07-15 23:28:18.292693] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.227 [2024-07-15 23:28:18.292706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.227 [2024-07-15 23:28:18.295968] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:03.227 [2024-07-15 23:28:18.305264] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.227 [2024-07-15 23:28:18.305761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.227 [2024-07-15 23:28:18.305788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.227 [2024-07-15 23:28:18.305819] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.227 [2024-07-15 23:28:18.306033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.227 [2024-07-15 23:28:18.306261] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.227 [2024-07-15 23:28:18.306282] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.227 [2024-07-15 23:28:18.306295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.227 [2024-07-15 23:28:18.309457] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:03.227 [2024-07-15 23:28:18.318804] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.227 [2024-07-15 23:28:18.319291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.227 [2024-07-15 23:28:18.319332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.227 [2024-07-15 23:28:18.319348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.227 [2024-07-15 23:28:18.319556] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.227 [2024-07-15 23:28:18.319800] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.227 [2024-07-15 23:28:18.319822] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.227 [2024-07-15 23:28:18.319836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.227 [2024-07-15 23:28:18.323000] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:03.227 [2024-07-15 23:28:18.332316] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.227 [2024-07-15 23:28:18.332759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.227 [2024-07-15 23:28:18.332800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.227 [2024-07-15 23:28:18.332816] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.227 [2024-07-15 23:28:18.333045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.227 [2024-07-15 23:28:18.333275] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.227 [2024-07-15 23:28:18.333296] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.227 [2024-07-15 23:28:18.333309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.227 [2024-07-15 23:28:18.336468] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:03.227 [2024-07-15 23:28:18.345827] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.227 [2024-07-15 23:28:18.346328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.227 [2024-07-15 23:28:18.346369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.227 [2024-07-15 23:28:18.346385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.227 [2024-07-15 23:28:18.346602] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.227 [2024-07-15 23:28:18.346848] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.227 [2024-07-15 23:28:18.346871] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.227 [2024-07-15 23:28:18.346885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.227 [2024-07-15 23:28:18.350061] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:03.227 [2024-07-15 23:28:18.359347] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.227 [2024-07-15 23:28:18.359878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.227 [2024-07-15 23:28:18.359920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.227 [2024-07-15 23:28:18.359936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.227 [2024-07-15 23:28:18.360164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.227 [2024-07-15 23:28:18.360376] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.227 [2024-07-15 23:28:18.360398] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.227 [2024-07-15 23:28:18.360411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.227 [2024-07-15 23:28:18.363571] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:03.227 [2024-07-15 23:28:18.372889] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.227 [2024-07-15 23:28:18.373415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.227 [2024-07-15 23:28:18.373441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.227 [2024-07-15 23:28:18.373471] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.227 [2024-07-15 23:28:18.373679] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.227 [2024-07-15 23:28:18.373921] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.227 [2024-07-15 23:28:18.373943] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.227 [2024-07-15 23:28:18.373957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.227 [2024-07-15 23:28:18.377137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:03.227 [2024-07-15 23:28:18.386432] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.227 [2024-07-15 23:28:18.386911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.227 [2024-07-15 23:28:18.386953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.227 [2024-07-15 23:28:18.386970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.227 [2024-07-15 23:28:18.387197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.227 [2024-07-15 23:28:18.387409] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.227 [2024-07-15 23:28:18.387430] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.227 [2024-07-15 23:28:18.387443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.227 [2024-07-15 23:28:18.390687] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:03.227 [2024-07-15 23:28:18.399837] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.228 [2024-07-15 23:28:18.400305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.228 [2024-07-15 23:28:18.400331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.228 [2024-07-15 23:28:18.400361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.228 [2024-07-15 23:28:18.400568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.228 [2024-07-15 23:28:18.400808] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.228 [2024-07-15 23:28:18.400831] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.228 [2024-07-15 23:28:18.400844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.228 [2024-07-15 23:28:18.404009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:03.228 [2024-07-15 23:28:18.413288] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.228 [2024-07-15 23:28:18.413781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.228 [2024-07-15 23:28:18.413809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.228 [2024-07-15 23:28:18.413825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.228 [2024-07-15 23:28:18.414039] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.228 [2024-07-15 23:28:18.414259] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.228 [2024-07-15 23:28:18.414281] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.228 [2024-07-15 23:28:18.414294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.228 [2024-07-15 23:28:18.417509] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:03.228 [2024-07-15 23:28:18.426905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.228 [2024-07-15 23:28:18.427423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.228 [2024-07-15 23:28:18.427464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.228 [2024-07-15 23:28:18.427479] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.228 [2024-07-15 23:28:18.427686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.228 [2024-07-15 23:28:18.427929] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.228 [2024-07-15 23:28:18.427952] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.228 [2024-07-15 23:28:18.427966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.228 [2024-07-15 23:28:18.431208] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:03.228 [2024-07-15 23:28:18.440539] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.228 [2024-07-15 23:28:18.441018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.228 [2024-07-15 23:28:18.441060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.228 [2024-07-15 23:28:18.441081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.228 [2024-07-15 23:28:18.441316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.228 [2024-07-15 23:28:18.441528] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.228 [2024-07-15 23:28:18.441549] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.228 [2024-07-15 23:28:18.441562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.228 [2024-07-15 23:28:18.444721] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:03.228 [2024-07-15 23:28:18.454064] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.228 [2024-07-15 23:28:18.454544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.228 [2024-07-15 23:28:18.454584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.228 [2024-07-15 23:28:18.454600] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.228 [2024-07-15 23:28:18.454835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.228 [2024-07-15 23:28:18.455069] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.228 [2024-07-15 23:28:18.455090] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.228 [2024-07-15 23:28:18.455103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.228 [2024-07-15 23:28:18.458265] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:03.228 [2024-07-15 23:28:18.467553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.228 [2024-07-15 23:28:18.468044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.228 [2024-07-15 23:28:18.468085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.228 [2024-07-15 23:28:18.468099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.228 [2024-07-15 23:28:18.468321] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.228 [2024-07-15 23:28:18.468533] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.228 [2024-07-15 23:28:18.468554] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.228 [2024-07-15 23:28:18.468567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.228 [2024-07-15 23:28:18.471748] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:03.228 [2024-07-15 23:28:18.481199] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.228 [2024-07-15 23:28:18.481694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.228 [2024-07-15 23:28:18.481721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.228 [2024-07-15 23:28:18.481744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.228 [2024-07-15 23:28:18.481961] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.228 [2024-07-15 23:28:18.482185] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.228 [2024-07-15 23:28:18.482207] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.228 [2024-07-15 23:28:18.482221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.228 [2024-07-15 23:28:18.485439] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:03.228 [2024-07-15 23:28:18.494591] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.228 [2024-07-15 23:28:18.495098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.228 [2024-07-15 23:28:18.495124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.228 [2024-07-15 23:28:18.495154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.228 [2024-07-15 23:28:18.495361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.228 [2024-07-15 23:28:18.495573] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.228 [2024-07-15 23:28:18.495594] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.228 [2024-07-15 23:28:18.495607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.228 [2024-07-15 23:28:18.498791] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:03.228 [2024-07-15 23:28:18.508106] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.228 [2024-07-15 23:28:18.508621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.228 [2024-07-15 23:28:18.508660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.228 [2024-07-15 23:28:18.508676] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.228 [2024-07-15 23:28:18.508914] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.228 [2024-07-15 23:28:18.509146] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.228 [2024-07-15 23:28:18.509167] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.228 [2024-07-15 23:28:18.509180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.228 [2024-07-15 23:28:18.512341] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:03.228 [2024-07-15 23:28:18.521637] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.228 [2024-07-15 23:28:18.522162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.228 [2024-07-15 23:28:18.522189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.228 [2024-07-15 23:28:18.522219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.228 [2024-07-15 23:28:18.522426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.228 [2024-07-15 23:28:18.522638] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.228 [2024-07-15 23:28:18.522659] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.228 [2024-07-15 23:28:18.522672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.228 [2024-07-15 23:28:18.525858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:03.228 [2024-07-15 23:28:18.535146] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.228 [2024-07-15 23:28:18.535627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.228 [2024-07-15 23:28:18.535667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.229 [2024-07-15 23:28:18.535683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.229 [2024-07-15 23:28:18.535920] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.229 [2024-07-15 23:28:18.536152] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.229 [2024-07-15 23:28:18.536174] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.229 [2024-07-15 23:28:18.536188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.229 [2024-07-15 23:28:18.539481] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:03.488 [2024-07-15 23:28:18.548654] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.488 [2024-07-15 23:28:18.549169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.488 [2024-07-15 23:28:18.549196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.488 [2024-07-15 23:28:18.549225] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.488 [2024-07-15 23:28:18.549433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.488 [2024-07-15 23:28:18.549644] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.488 [2024-07-15 23:28:18.549665] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.488 [2024-07-15 23:28:18.549678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.488 [2024-07-15 23:28:18.552881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:03.488 [2024-07-15 23:28:18.562076] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.488 [2024-07-15 23:28:18.562544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.488 [2024-07-15 23:28:18.562585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.488 [2024-07-15 23:28:18.562600] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.488 [2024-07-15 23:28:18.562838] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.488 [2024-07-15 23:28:18.563073] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.488 [2024-07-15 23:28:18.563094] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.488 [2024-07-15 23:28:18.563107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.488 [2024-07-15 23:28:18.566272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:03.488 [2024-07-15 23:28:18.575564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.488 [2024-07-15 23:28:18.576061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.488 [2024-07-15 23:28:18.576087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.488 [2024-07-15 23:28:18.576123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.488 [2024-07-15 23:28:18.576332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.488 [2024-07-15 23:28:18.576543] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.488 [2024-07-15 23:28:18.576563] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.488 [2024-07-15 23:28:18.576576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.488 [2024-07-15 23:28:18.579762] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:03.488 [2024-07-15 23:28:18.589093] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.488 [2024-07-15 23:28:18.589597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.488 [2024-07-15 23:28:18.589638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.488 [2024-07-15 23:28:18.589653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.488 [2024-07-15 23:28:18.589890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.488 [2024-07-15 23:28:18.590123] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.488 [2024-07-15 23:28:18.590144] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.488 [2024-07-15 23:28:18.590157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.488 [2024-07-15 23:28:18.593355] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:03.488 [2024-07-15 23:28:18.602467] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.488 [2024-07-15 23:28:18.602943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.488 [2024-07-15 23:28:18.602984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.488 [2024-07-15 23:28:18.603000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.488 [2024-07-15 23:28:18.603224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.488 [2024-07-15 23:28:18.603436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.488 [2024-07-15 23:28:18.603457] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.488 [2024-07-15 23:28:18.603470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.488 [2024-07-15 23:28:18.606633] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:03.488 [2024-07-15 23:28:18.615954] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.488 [2024-07-15 23:28:18.616473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.488 [2024-07-15 23:28:18.616498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.488 [2024-07-15 23:28:18.616528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.488 [2024-07-15 23:28:18.616759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.488 [2024-07-15 23:28:18.616979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.488 [2024-07-15 23:28:18.617005] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.488 [2024-07-15 23:28:18.617019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.488 [2024-07-15 23:28:18.620198] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:03.488 [2024-07-15 23:28:18.629497] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.488 [2024-07-15 23:28:18.629925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.488 [2024-07-15 23:28:18.629953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.488 [2024-07-15 23:28:18.629969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.488 [2024-07-15 23:28:18.630197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.488 [2024-07-15 23:28:18.630409] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.488 [2024-07-15 23:28:18.630430] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.488 [2024-07-15 23:28:18.630443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.488 [2024-07-15 23:28:18.633611] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:03.488 [2024-07-15 23:28:18.642976] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.488 [2024-07-15 23:28:18.643389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.488 [2024-07-15 23:28:18.643430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.489 [2024-07-15 23:28:18.643445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.489 [2024-07-15 23:28:18.643666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.489 [2024-07-15 23:28:18.643911] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.489 [2024-07-15 23:28:18.643933] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.489 [2024-07-15 23:28:18.643947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.489 [2024-07-15 23:28:18.647128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:03.489 [2024-07-15 23:28:18.656433] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.489 [2024-07-15 23:28:18.656847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.489 [2024-07-15 23:28:18.656875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.489 [2024-07-15 23:28:18.656905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.489 [2024-07-15 23:28:18.657132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.489 [2024-07-15 23:28:18.657344] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.489 [2024-07-15 23:28:18.657366] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.489 [2024-07-15 23:28:18.657379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.489 [2024-07-15 23:28:18.660546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:03.489 [2024-07-15 23:28:18.669895] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.489 [2024-07-15 23:28:18.670301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.489 [2024-07-15 23:28:18.670343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.489 [2024-07-15 23:28:18.670359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.489 [2024-07-15 23:28:18.670588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.489 [2024-07-15 23:28:18.670816] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.489 [2024-07-15 23:28:18.670838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.489 [2024-07-15 23:28:18.670853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.489 [2024-07-15 23:28:18.674068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:03.489 [2024-07-15 23:28:18.683489] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.489 [2024-07-15 23:28:18.683897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.489 [2024-07-15 23:28:18.683925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.489 [2024-07-15 23:28:18.683941] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.489 [2024-07-15 23:28:18.684156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.489 [2024-07-15 23:28:18.684384] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.489 [2024-07-15 23:28:18.684405] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.489 [2024-07-15 23:28:18.684419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.489 [2024-07-15 23:28:18.687606] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:03.489 [2024-07-15 23:28:18.696897] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.489 [2024-07-15 23:28:18.697323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.489 [2024-07-15 23:28:18.697349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.489 [2024-07-15 23:28:18.697364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.489 [2024-07-15 23:28:18.697587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.489 [2024-07-15 23:28:18.697827] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.489 [2024-07-15 23:28:18.697849] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.489 [2024-07-15 23:28:18.697863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.489 [2024-07-15 23:28:18.701030] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:03.489 [2024-07-15 23:28:18.710344] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.489 [2024-07-15 23:28:18.710749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.489 [2024-07-15 23:28:18.710777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.489 [2024-07-15 23:28:18.710793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.489 [2024-07-15 23:28:18.711013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.489 [2024-07-15 23:28:18.711243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.489 [2024-07-15 23:28:18.711264] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.489 [2024-07-15 23:28:18.711278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.489 [2024-07-15 23:28:18.714439] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:03.489 [2024-07-15 23:28:18.723786] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.489 [2024-07-15 23:28:18.724169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.489 [2024-07-15 23:28:18.724211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.489 [2024-07-15 23:28:18.724226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.489 [2024-07-15 23:28:18.724448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.489 [2024-07-15 23:28:18.724660] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.489 [2024-07-15 23:28:18.724681] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.489 [2024-07-15 23:28:18.724694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.489 [2024-07-15 23:28:18.727885] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:03.489 [2024-07-15 23:28:18.737208] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.489 [2024-07-15 23:28:18.737595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.489 [2024-07-15 23:28:18.737635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.489 [2024-07-15 23:28:18.737650] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.489 [2024-07-15 23:28:18.737901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.489 [2024-07-15 23:28:18.738134] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.489 [2024-07-15 23:28:18.738155] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.489 [2024-07-15 23:28:18.738168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.489 [2024-07-15 23:28:18.741369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:03.489 [2024-07-15 23:28:18.750675] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.489 [2024-07-15 23:28:18.751105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.489 [2024-07-15 23:28:18.751132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.489 [2024-07-15 23:28:18.751148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.489 [2024-07-15 23:28:18.751355] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.489 [2024-07-15 23:28:18.751567] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.489 [2024-07-15 23:28:18.751588] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.489 [2024-07-15 23:28:18.751606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.489 [2024-07-15 23:28:18.754792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:03.489 [2024-07-15 23:28:18.764181] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.489 [2024-07-15 23:28:18.764576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.489 [2024-07-15 23:28:18.764618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.489 [2024-07-15 23:28:18.764634] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.489 [2024-07-15 23:28:18.764887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.489 [2024-07-15 23:28:18.765121] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.489 [2024-07-15 23:28:18.765142] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.489 [2024-07-15 23:28:18.765155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.489 [2024-07-15 23:28:18.768318] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:03.489 [2024-07-15 23:28:18.777626] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.489 [2024-07-15 23:28:18.778033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.489 [2024-07-15 23:28:18.778061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.489 [2024-07-15 23:28:18.778076] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.489 [2024-07-15 23:28:18.778283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.489 [2024-07-15 23:28:18.778496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.489 [2024-07-15 23:28:18.778518] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.489 [2024-07-15 23:28:18.778531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.489 [2024-07-15 23:28:18.781694] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:03.489 [2024-07-15 23:28:18.791249] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.489 [2024-07-15 23:28:18.791637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.489 [2024-07-15 23:28:18.791678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.489 [2024-07-15 23:28:18.791693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.489 [2024-07-15 23:28:18.791929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.489 [2024-07-15 23:28:18.792162] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.489 [2024-07-15 23:28:18.792183] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.489 [2024-07-15 23:28:18.792198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.489 [2024-07-15 23:28:18.795359] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:03.749 [2024-07-15 23:28:18.804856] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.749 [2024-07-15 23:28:18.805282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-07-15 23:28:18.805312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.749 [2024-07-15 23:28:18.805343] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.749 [2024-07-15 23:28:18.805551] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.749 [2024-07-15 23:28:18.805791] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.749 [2024-07-15 23:28:18.805813] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.749 [2024-07-15 23:28:18.805828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.749 [2024-07-15 23:28:18.809153] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:03.749 [2024-07-15 23:28:18.818517] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.749 [2024-07-15 23:28:18.818905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-07-15 23:28:18.818933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.749 [2024-07-15 23:28:18.818949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.749 [2024-07-15 23:28:18.819176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.749 [2024-07-15 23:28:18.819389] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.749 [2024-07-15 23:28:18.819410] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.749 [2024-07-15 23:28:18.819423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.749 [2024-07-15 23:28:18.822632] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:03.749 [2024-07-15 23:28:18.832035] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.749 [2024-07-15 23:28:18.832401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-07-15 23:28:18.832442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.749 [2024-07-15 23:28:18.832457] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.749 [2024-07-15 23:28:18.832679] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.749 [2024-07-15 23:28:18.832921] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.749 [2024-07-15 23:28:18.832944] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.749 [2024-07-15 23:28:18.832958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.749 [2024-07-15 23:28:18.836289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:03.749 [2024-07-15 23:28:18.845553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.749 [2024-07-15 23:28:18.845963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-07-15 23:28:18.846005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.749 [2024-07-15 23:28:18.846020] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.749 [2024-07-15 23:28:18.846228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.749 [2024-07-15 23:28:18.846445] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.749 [2024-07-15 23:28:18.846467] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.749 [2024-07-15 23:28:18.846480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.749 [2024-07-15 23:28:18.849686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:03.749 [2024-07-15 23:28:18.859137] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.749 [2024-07-15 23:28:18.859622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-07-15 23:28:18.859650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.749 [2024-07-15 23:28:18.859681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.749 [2024-07-15 23:28:18.859903] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.749 [2024-07-15 23:28:18.860123] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.749 [2024-07-15 23:28:18.860145] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.749 [2024-07-15 23:28:18.860159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.749 [2024-07-15 23:28:18.863414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:03.749 [2024-07-15 23:28:18.872657] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.749 [2024-07-15 23:28:18.873019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-07-15 23:28:18.873048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.749 [2024-07-15 23:28:18.873064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.749 [2024-07-15 23:28:18.873286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.749 [2024-07-15 23:28:18.873499] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.749 [2024-07-15 23:28:18.873520] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.749 [2024-07-15 23:28:18.873533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.749 [2024-07-15 23:28:18.876775] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:03.749 [2024-07-15 23:28:18.886306] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.749 [2024-07-15 23:28:18.886702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-07-15 23:28:18.886750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.749 [2024-07-15 23:28:18.886766] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.749 [2024-07-15 23:28:18.886996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.749 [2024-07-15 23:28:18.887228] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.749 [2024-07-15 23:28:18.887249] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.749 [2024-07-15 23:28:18.887262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.749 [2024-07-15 23:28:18.890494] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:03.749 [2024-07-15 23:28:18.899731] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.749 [2024-07-15 23:28:18.900144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-07-15 23:28:18.900171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.749 [2024-07-15 23:28:18.900187] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.749 [2024-07-15 23:28:18.900395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.749 [2024-07-15 23:28:18.900607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.749 [2024-07-15 23:28:18.900628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.749 [2024-07-15 23:28:18.900641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.749 [2024-07-15 23:28:18.903919] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:03.749 [2024-07-15 23:28:18.913248] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.749 [2024-07-15 23:28:18.913662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-07-15 23:28:18.913688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.749 [2024-07-15 23:28:18.913703] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.749 [2024-07-15 23:28:18.913956] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.749 [2024-07-15 23:28:18.914188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.750 [2024-07-15 23:28:18.914209] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.750 [2024-07-15 23:28:18.914223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.750 23:28:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:03.750 23:28:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:25:03.750 23:28:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:03.750 23:28:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:03.750 23:28:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:03.750 [2024-07-15 23:28:18.917392] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
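The xtrace lines interleaved above, (( i == 0 )), return 0 and timing_exit start_nvmf_tgt, are the test harness finishing its start-up wait for the nvmf target application and returning success, after which it moves on to configuring the target over RPC. A readiness loop of this general shape (a sketch only; the real helper lives in autotest_common.sh and its exact logic, socket path and retry budget are not shown in this log) polls the target's RPC socket until it answers:

    # Hypothetical readiness helper in the spirit of the harness's wait loop:
    # keep polling rpc_get_methods and treat i reaching 0 as a start-up timeout.
    wait_for_rpc() {
        local i=30
        while ! ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
            ((i == 0)) && return 1   # target never answered
            ((i--))
            sleep 1
        done
        return 0
    }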
00:25:03.750 [2024-07-15 23:28:18.926760] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.750 [2024-07-15 23:28:18.927164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-07-15 23:28:18.927192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.750 [2024-07-15 23:28:18.927208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.750 [2024-07-15 23:28:18.927422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.750 [2024-07-15 23:28:18.927641] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.750 [2024-07-15 23:28:18.927662] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.750 [2024-07-15 23:28:18.927676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.750 [2024-07-15 23:28:18.930912] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:03.750 23:28:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:03.750 23:28:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:03.750 23:28:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.750 23:28:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:03.750 [2024-07-15 23:28:18.938939] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:03.750 [2024-07-15 23:28:18.940398] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.750 [2024-07-15 23:28:18.940917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-07-15 23:28:18.940945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.750 [2024-07-15 23:28:18.940961] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.750 [2024-07-15 23:28:18.941176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.750 [2024-07-15 23:28:18.941395] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.750 [2024-07-15 23:28:18.941416] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.750 [2024-07-15 23:28:18.941430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.750 [2024-07-15 23:28:18.944732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:03.750 23:28:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.750 23:28:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:03.750 23:28:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.750 23:28:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:03.750 [2024-07-15 23:28:18.953867] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.750 [2024-07-15 23:28:18.954346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-07-15 23:28:18.954371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.750 [2024-07-15 23:28:18.954401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.750 [2024-07-15 23:28:18.954602] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.750 [2024-07-15 23:28:18.954838] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.750 [2024-07-15 23:28:18.954861] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.750 [2024-07-15 23:28:18.954874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.750 [2024-07-15 23:28:18.958037] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:03.750 [2024-07-15 23:28:18.967346] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.750 [2024-07-15 23:28:18.967859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-07-15 23:28:18.967905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.750 [2024-07-15 23:28:18.967924] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.750 [2024-07-15 23:28:18.968158] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.750 [2024-07-15 23:28:18.968386] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.750 [2024-07-15 23:28:18.968407] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.750 [2024-07-15 23:28:18.968424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.750 [2024-07-15 23:28:18.971593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:03.750 Malloc0 00:25:03.750 [2024-07-15 23:28:18.981237] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.750 23:28:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.750 23:28:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:03.750 23:28:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.750 [2024-07-15 23:28:18.981803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-07-15 23:28:18.981835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.750 [2024-07-15 23:28:18.981854] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.750 23:28:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:03.750 [2024-07-15 23:28:18.982077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.750 [2024-07-15 23:28:18.982300] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.750 [2024-07-15 23:28:18.982323] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.750 [2024-07-15 23:28:18.982341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.750 [2024-07-15 23:28:18.985600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:03.750 23:28:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.750 23:28:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:03.750 23:28:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.750 23:28:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:03.750 [2024-07-15 23:28:18.994895] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.750 [2024-07-15 23:28:18.995367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-07-15 23:28:18.995407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ba080 with addr=10.0.0.2, port=4420 00:25:03.750 [2024-07-15 23:28:18.995423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ba080 is same with the state(5) to be set 00:25:03.750 [2024-07-15 23:28:18.995631] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ba080 (9): Bad file descriptor 00:25:03.750 [2024-07-15 23:28:18.995874] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.750 [2024-07-15 23:28:18.995897] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.750 [2024-07-15 23:28:18.995911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:03.750 23:28:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.750 23:28:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:03.750 23:28:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.750 23:28:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:03.750 [2024-07-15 23:28:18.999244] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:03.750 [2024-07-15 23:28:19.000844] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:03.750 23:28:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.750 23:28:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2442323 00:25:03.750 [2024-07-15 23:28:19.008462] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.750 [2024-07-15 23:28:19.038351] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:13.712 00:25:13.712 Latency(us) 00:25:13.712 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:13.712 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:13.712 Verification LBA range: start 0x0 length 0x4000 00:25:13.712 Nvme1n1 : 15.01 6352.81 24.82 10777.49 0.00 7448.36 898.09 17476.27 00:25:13.712 =================================================================================================================== 00:25:13.712 Total : 6352.81 24.82 10777.49 0.00 7448.36 898.09 17476.27 00:25:13.712 23:28:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:25:13.712 23:28:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:13.712 23:28:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.712 23:28:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:13.712 23:28:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.712 23:28:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:25:13.712 23:28:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:25:13.712 23:28:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:13.712 23:28:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:25:13.712 23:28:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:13.712 23:28:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:25:13.712 23:28:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:13.712 23:28:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:13.712 rmmod nvme_tcp 00:25:13.712 rmmod nvme_fabrics 00:25:13.712 rmmod nvme_keyring 00:25:13.712 23:28:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:13.712 23:28:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:25:13.712 23:28:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:25:13.712 23:28:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 2443109 ']' 00:25:13.712 23:28:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2443109 00:25:13.712 23:28:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 2443109 ']' 
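Taken together, the rpc_cmd calls interleaved through the trace above are the complete target-side bring-up for this bdevperf pass: create the TCP transport, create a 64 MB malloc bdev with 512-byte blocks, create subsystem nqn.2016-06.io.spdk:cnode1, attach Malloc0 as its namespace, and finally add the 10.0.0.2:4420 listener. The moment the listener is announced ("NVMe/TCP Target Listening on 10.0.0.2 port 4420"), the host's next reset attempt succeeds ("Resetting controller successful.") and bdevperf prints its 15-second verify summary (runtime, IOPS, MiB/s, failures and timeouts per second, and average/min/max latency in microseconds). Run by hand against a standalone nvmf_tgt the same sequence would look roughly like the sketch below; rpc.py is the standalone equivalent of the rpc_cmd helper, and the default RPC socket path is an assumption of the sketch.

    # Minimal bring-up sketch; flags copied from the rpc_cmd lines in the trace.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420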
00:25:13.712 23:28:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 2443109 00:25:13.712 23:28:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:25:13.712 23:28:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:13.712 23:28:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2443109 00:25:13.712 23:28:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:13.712 23:28:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:13.713 23:28:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2443109' 00:25:13.713 killing process with pid 2443109 00:25:13.713 23:28:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 2443109 00:25:13.713 23:28:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 2443109 00:25:13.713 23:28:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:13.713 23:28:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:13.713 23:28:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:13.713 23:28:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:13.713 23:28:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:13.713 23:28:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:13.713 23:28:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:13.713 23:28:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:15.086 23:28:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:15.086 00:25:15.086 real 0m23.344s 00:25:15.086 user 1m3.136s 00:25:15.086 sys 0m4.508s 00:25:15.086 23:28:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:15.086 23:28:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:15.086 ************************************ 00:25:15.086 END TEST nvmf_bdevperf 00:25:15.086 ************************************ 00:25:15.086 23:28:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:15.086 23:28:30 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:25:15.086 23:28:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:15.086 23:28:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:15.086 23:28:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:15.086 ************************************ 00:25:15.086 START TEST nvmf_target_disconnect 00:25:15.086 ************************************ 00:25:15.086 23:28:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:25:15.086 * Looking for test storage... 
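Teardown then mirrors the bring-up: the subsystem is deleted, the kernel nvme-tcp/nvme-fabrics modules pulled in on the initiator side are removed, the target process recorded in nvmfpid (2443109 in this run) is killed and reaped, and the test address is flushed from cvl_0_1; the "real 0m23.344s" timing closes out the whole nvmf_bdevperf test before nvmf_target_disconnect starts. A rough manual equivalent of that cleanup, with the pid and interface name obviously specific to this run and assuming the target was started from the current shell so wait can reap it:

    # Rough manual equivalent of the nvmftestfini sequence in the trace (run as root).
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"   # 2443109 in this run
    ip -4 addr flush cvl_0_1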
00:25:15.086 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:15.086 23:28:30 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:15.086 23:28:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:25:15.086 23:28:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:15.086 23:28:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:15.086 23:28:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:15.086 23:28:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:15.086 23:28:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:15.086 23:28:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:15.086 23:28:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:15.086 23:28:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:15.086 23:28:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:15.086 23:28:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:15.086 23:28:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:15.086 23:28:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:25:15.086 23:28:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:15.086 23:28:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:15.086 23:28:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:15.086 23:28:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:15.086 23:28:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:15.086 23:28:30 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:15.086 23:28:30 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:15.086 23:28:30 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:15.087 23:28:30 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.087 23:28:30 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.087 23:28:30 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.087 23:28:30 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:25:15.087 23:28:30 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.087 23:28:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:25:15.087 23:28:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:15.087 23:28:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:15.087 23:28:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:15.087 23:28:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:15.087 23:28:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:15.087 23:28:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:15.087 23:28:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:15.087 23:28:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:15.087 23:28:30 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:15.087 23:28:30 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:25:15.087 23:28:30 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:25:15.087 23:28:30 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:25:15.087 23:28:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:15.087 23:28:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:15.087 23:28:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:25:15.087 23:28:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:15.087 23:28:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:15.087 23:28:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:15.087 23:28:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:15.087 23:28:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:15.087 23:28:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:15.087 23:28:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:15.087 23:28:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:25:15.087 23:28:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
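gather_supported_nvmf_pci_devs, traced above, just builds arrays of the PCI device IDs the framework knows about (Intel E810 0x1592/0x159b, X722 0x37d2, and several Mellanox IDs) before scanning the bus; since this run takes the e810 branch, the match that matters is vendor 0x8086, device 0x159b, and the "Found 0000:84:00.0/0000:84:00.1" lines that follow report the two E810 ports and the net devices behind them (cvl_0_0 and cvl_0_1). The real selection logic lives in nvmf/common.sh; the snippet below is only an illustration of the same lookup using lspci and the sysfs path the script itself globs:

    # Illustration only: list E810 ports (8086:159b) and the netdev behind each one.
    lspci -D -d 8086:159b
    for pci in /sys/bus/pci/devices/0000:84:00.*; do
        ls "$pci/net" 2>/dev/null    # prints cvl_0_0 / cvl_0_1 on this host
    done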
00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:16.985 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:16.985 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:16.985 23:28:32 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:16.985 Found net devices under 0000:84:00.0: cvl_0_0 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:16.985 Found net devices under 0000:84:00.1: cvl_0_1 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:16.985 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:16.986 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:16.986 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:16.986 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:16.986 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:25:16.986 00:25:16.986 --- 10.0.0.2 ping statistics --- 00:25:16.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:16.986 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:25:16.986 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:16.986 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:16.986 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:25:16.986 00:25:16.986 --- 10.0.0.1 ping statistics --- 00:25:16.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:16.986 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:25:16.986 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:16.986 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:25:16.986 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:16.986 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:16.986 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:16.986 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:16.986 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:16.986 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:16.986 23:28:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:17.245 23:28:32 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:25:17.245 23:28:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:17.245 23:28:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:17.245 23:28:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:17.245 ************************************ 00:25:17.245 START TEST nvmf_target_disconnect_tc1 00:25:17.245 ************************************ 00:25:17.245 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:25:17.245 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:17.245 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:25:17.245 
23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:17.245 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:17.245 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:17.245 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:17.245 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:17.245 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:17.245 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:17.245 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:17.245 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:25:17.245 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:17.245 EAL: No free 2048 kB hugepages reported on node 1 00:25:17.245 [2024-07-15 23:28:32.429089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.245 [2024-07-15 23:28:32.429162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bbc930 with addr=10.0.0.2, port=4420 00:25:17.245 [2024-07-15 23:28:32.429198] nvme_tcp.c:2712:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:17.245 [2024-07-15 23:28:32.429220] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:17.245 [2024-07-15 23:28:32.429235] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:25:17.245 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:25:17.245 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:25:17.245 Initializing NVMe Controllers 00:25:17.245 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:25:17.245 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:17.245 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:17.245 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:17.245 00:25:17.245 real 0m0.096s 00:25:17.245 user 0m0.041s 00:25:17.245 sys 
0m0.053s 00:25:17.245 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:17.245 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:17.245 ************************************ 00:25:17.245 END TEST nvmf_target_disconnect_tc1 00:25:17.245 ************************************ 00:25:17.246 23:28:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:25:17.246 23:28:32 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:25:17.246 23:28:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:17.246 23:28:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:17.246 23:28:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:17.246 ************************************ 00:25:17.246 START TEST nvmf_target_disconnect_tc2 00:25:17.246 ************************************ 00:25:17.246 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:25:17.246 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:25:17.246 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:25:17.246 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:17.246 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:17.246 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:17.246 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2446246 00:25:17.246 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:25:17.246 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2446246 00:25:17.246 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2446246 ']' 00:25:17.246 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:17.246 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:17.246 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:17.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
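The nvmftestinit/nvmf_tcp_init trace above sets up the two-port topology every disconnect sub-test relies on: the target port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2, the initiator port cvl_0_1 stays in the root namespace with 10.0.0.1, an iptables rule opens TCP/4420 toward the initiator side, and the two pings confirm both directions work. With nothing listening yet, tc1 then points the bundled reconnect example at 10.0.0.2:4420 and the probe fails with the same errno 111; that failure is exactly what the NOT wrapper requires, so the sub-test passes. For tc2, the target is started inside the namespace (the "ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -m 0xF0" line above). Condensed from the trace, with interface names specific to this host and root privileges assumed, the namespace plumbing is:

    # Namespace plumbing copied from the nvmftestinit trace (run as root).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1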
00:25:17.246 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:17.246 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:17.246 [2024-07-15 23:28:32.542587] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:25:17.246 [2024-07-15 23:28:32.542667] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:17.504 EAL: No free 2048 kB hugepages reported on node 1 00:25:17.504 [2024-07-15 23:28:32.610155] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:17.504 [2024-07-15 23:28:32.720095] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:17.504 [2024-07-15 23:28:32.720148] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:17.504 [2024-07-15 23:28:32.720171] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:17.504 [2024-07-15 23:28:32.720181] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:17.504 [2024-07-15 23:28:32.720190] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:17.504 [2024-07-15 23:28:32.720272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:25:17.504 [2024-07-15 23:28:32.720338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:25:17.504 [2024-07-15 23:28:32.720459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:25:17.504 [2024-07-15 23:28:32.720466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:25:17.761 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:17.761 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:25:17.761 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:17.761 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:17.761 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:17.761 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:17.761 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:17.761 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.761 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:17.761 Malloc0 00:25:17.761 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.761 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:17.761 23:28:32 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.761 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:17.761 [2024-07-15 23:28:32.901718] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:17.761 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.761 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:17.761 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.761 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:17.761 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.761 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:17.761 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.761 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:17.761 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.761 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:17.761 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.761 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:17.761 [2024-07-15 23:28:32.929994] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:17.761 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.761 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:17.761 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.761 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:17.761 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.761 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2446303 00:25:17.761 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:25:17.761 23:28:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:17.761 EAL: No free 2048 kB 
hugepages reported on node 1 00:25:19.658 23:28:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2446246 00:25:19.658 23:28:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:25:19.658 Read completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Read completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Read completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Read completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Read completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Read completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Read completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Read completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Write completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Read completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Write completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Read completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Read completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Write completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Read completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Read completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Read completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Read completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Read completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Read completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Write completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Read completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Write completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Read completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Read completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Write completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Write completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Read completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Write completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Read completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Read completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Write completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 [2024-07-15 23:28:34.954712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:19.658 Read completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Read completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Read completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Read completed with error (sct=0, sc=8) 00:25:19.658 starting 
I/O failed 00:25:19.658 Read completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Read completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Write completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Write completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Read completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Write completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Read completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Write completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Write completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Read completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Read completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Write completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Read completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Write completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Read completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Read completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Write completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Read completed with error (sct=0, sc=8) 00:25:19.658 starting I/O failed 00:25:19.658 Read completed with error (sct=0, sc=8) 00:25:19.659 starting I/O failed 00:25:19.659 Read completed with error (sct=0, sc=8) 00:25:19.659 starting I/O failed 00:25:19.659 Read completed with error (sct=0, sc=8) 00:25:19.659 starting I/O failed 00:25:19.659 Read completed with error (sct=0, sc=8) 00:25:19.659 starting I/O failed 00:25:19.659 Read completed with error (sct=0, sc=8) 00:25:19.659 starting I/O failed 00:25:19.659 Write completed with error (sct=0, sc=8) 00:25:19.659 starting I/O failed 00:25:19.659 Read completed with error (sct=0, sc=8) 00:25:19.659 starting I/O failed 00:25:19.659 Write completed with error (sct=0, sc=8) 00:25:19.659 starting I/O failed 00:25:19.659 Read completed with error (sct=0, sc=8) 00:25:19.659 starting I/O failed 00:25:19.659 Read completed with error (sct=0, sc=8) 00:25:19.659 starting I/O failed 00:25:19.659 [2024-07-15 23:28:34.955130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:19.659 Read completed with error (sct=0, sc=8) 00:25:19.659 starting I/O failed 00:25:19.659 Read completed with error (sct=0, sc=8) 00:25:19.659 starting I/O failed 00:25:19.659 Read completed with error (sct=0, sc=8) 00:25:19.659 starting I/O failed 00:25:19.659 Read completed with error (sct=0, sc=8) 00:25:19.659 starting I/O failed 00:25:19.659 Read completed with error (sct=0, sc=8) 00:25:19.659 starting I/O failed 00:25:19.659 Read completed with error (sct=0, sc=8) 00:25:19.659 starting I/O failed 00:25:19.659 Read completed with error (sct=0, sc=8) 00:25:19.659 starting I/O failed 00:25:19.659 Read completed with error (sct=0, sc=8) 00:25:19.659 starting I/O failed 00:25:19.659 Read completed with error (sct=0, sc=8) 00:25:19.659 starting I/O failed 00:25:19.659 Read completed with error (sct=0, sc=8) 00:25:19.659 starting I/O failed 00:25:19.659 Read completed with error (sct=0, sc=8) 00:25:19.659 starting I/O failed 
00:25:19.659 Read completed with error (sct=0, sc=8) 00:25:19.659 starting I/O failed 00:25:19.659 Write completed with error (sct=0, sc=8) 00:25:19.659 starting I/O failed 00:25:19.659 Write completed with error (sct=0, sc=8) 00:25:19.659 starting I/O failed 00:25:19.659 Write completed with error (sct=0, sc=8) 00:25:19.659 starting I/O failed 00:25:19.659 Read completed with error (sct=0, sc=8) 00:25:19.659 starting I/O failed 00:25:19.659 Read completed with error (sct=0, sc=8) 00:25:19.659 starting I/O failed 00:25:19.659 Write completed with error (sct=0, sc=8) 00:25:19.659 starting I/O failed 00:25:19.659 Read completed with error (sct=0, sc=8) 00:25:19.659 starting I/O failed 00:25:19.659 Read completed with error (sct=0, sc=8) 00:25:19.659 starting I/O failed 00:25:19.659 Read completed with error (sct=0, sc=8) 00:25:19.659 starting I/O failed 00:25:19.659 Read completed with error (sct=0, sc=8) 00:25:19.659 starting I/O failed 00:25:19.659 Read completed with error (sct=0, sc=8) 00:25:19.659 starting I/O failed 00:25:19.659 Write completed with error (sct=0, sc=8) 00:25:19.659 starting I/O failed 00:25:19.659 Write completed with error (sct=0, sc=8) 00:25:19.659 starting I/O failed 00:25:19.659 Write completed with error (sct=0, sc=8) 00:25:19.659 starting I/O failed 00:25:19.659 Write completed with error (sct=0, sc=8) 00:25:19.659 starting I/O failed 00:25:19.659 Write completed with error (sct=0, sc=8) 00:25:19.659 starting I/O failed 00:25:19.659 Read completed with error (sct=0, sc=8) 00:25:19.659 starting I/O failed 00:25:19.659 Read completed with error (sct=0, sc=8) 00:25:19.659 starting I/O failed 00:25:19.659 Read completed with error (sct=0, sc=8) 00:25:19.659 starting I/O failed 00:25:19.659 Write completed with error (sct=0, sc=8) 00:25:19.659 starting I/O failed 00:25:19.659 [2024-07-15 23:28:34.955466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:19.659 [2024-07-15 23:28:34.955697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.659 [2024-07-15 23:28:34.955754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.659 qpair failed and we were unable to recover it. 00:25:19.659 [2024-07-15 23:28:34.955880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.659 [2024-07-15 23:28:34.955906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.659 qpair failed and we were unable to recover it. 00:25:19.659 [2024-07-15 23:28:34.956020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.659 [2024-07-15 23:28:34.956059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.659 qpair failed and we were unable to recover it. 00:25:19.659 [2024-07-15 23:28:34.956225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.659 [2024-07-15 23:28:34.956248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.659 qpair failed and we were unable to recover it. 
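(Editorial aside, not part of the console output: the "CQ transport error -6 (No such device or address) on qpair id 4/3/2" lines above mark the point where the host's I/O queue pairs fail outright after target_disconnect.sh killed the target process. A minimal, hypothetical C sketch of how such a failure surfaces through SPDK's public polling API follows; it is illustrative only and not taken from the test code.)

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Hypothetical host-side poller: once the target is gone, outstanding
     * commands complete with an error status ("Read/Write completed with
     * error" above) and spdk_nvme_qpair_process_completions() returns a
     * negative errno such as -ENXIO (-6), i.e. the "CQ transport error -6"
     * reported in this log. */
    static bool poll_io_qpair(struct spdk_nvme_qpair *qpair)
    {
            int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);

            if (rc < 0) {
                    /* Transport-level failure: the qpair is unusable until it is
                     * reconnected or destroyed by the application. */
                    fprintf(stderr, "qpair failed: %d\n", rc);
                    return false;
            }
            return true;
    }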
00:25:19.659 [2024-07-15 23:28:34.956376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.659 [2024-07-15 23:28:34.956404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.659 qpair failed and we were unable to recover it. 00:25:19.659 [2024-07-15 23:28:34.956558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.659 [2024-07-15 23:28:34.956585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.659 qpair failed and we were unable to recover it. 00:25:19.659 [2024-07-15 23:28:34.956758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.659 [2024-07-15 23:28:34.956784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.659 qpair failed and we were unable to recover it. 00:25:19.659 [2024-07-15 23:28:34.956904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.659 [2024-07-15 23:28:34.956929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.659 qpair failed and we were unable to recover it. 00:25:19.659 [2024-07-15 23:28:34.957055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.659 [2024-07-15 23:28:34.957089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.659 qpair failed and we were unable to recover it. 00:25:19.659 [2024-07-15 23:28:34.957296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.659 [2024-07-15 23:28:34.957319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.659 qpair failed and we were unable to recover it. 00:25:19.659 [2024-07-15 23:28:34.957464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.659 [2024-07-15 23:28:34.957501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.659 qpair failed and we were unable to recover it. 00:25:19.659 [2024-07-15 23:28:34.957656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.659 [2024-07-15 23:28:34.957679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.659 qpair failed and we were unable to recover it. 00:25:19.659 [2024-07-15 23:28:34.957858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.659 [2024-07-15 23:28:34.957883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.659 qpair failed and we were unable to recover it. 00:25:19.659 [2024-07-15 23:28:34.958027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.659 [2024-07-15 23:28:34.958052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.659 qpair failed and we were unable to recover it. 
00:25:19.659 [2024-07-15 23:28:34.958275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.659 [2024-07-15 23:28:34.958298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.659 qpair failed and we were unable to recover it. 00:25:19.659 [2024-07-15 23:28:34.958517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.659 [2024-07-15 23:28:34.958540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.659 qpair failed and we were unable to recover it. 00:25:19.659 [2024-07-15 23:28:34.958695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.659 [2024-07-15 23:28:34.958717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.659 qpair failed and we were unable to recover it. 00:25:19.659 [2024-07-15 23:28:34.958857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.659 [2024-07-15 23:28:34.958882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.659 qpair failed and we were unable to recover it. 00:25:19.659 [2024-07-15 23:28:34.959053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.659 [2024-07-15 23:28:34.959081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.659 qpair failed and we were unable to recover it. 00:25:19.659 [2024-07-15 23:28:34.959312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.659 [2024-07-15 23:28:34.959341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.659 qpair failed and we were unable to recover it. 00:25:19.659 [2024-07-15 23:28:34.959516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.659 [2024-07-15 23:28:34.959544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.659 qpair failed and we were unable to recover it. 00:25:19.659 [2024-07-15 23:28:34.959756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.659 [2024-07-15 23:28:34.959782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.659 qpair failed and we were unable to recover it. 00:25:19.659 [2024-07-15 23:28:34.959922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.659 [2024-07-15 23:28:34.959946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.659 qpair failed and we were unable to recover it. 00:25:19.659 [2024-07-15 23:28:34.960116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.659 [2024-07-15 23:28:34.960152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.659 qpair failed and we were unable to recover it. 
00:25:19.659 [2024-07-15 23:28:34.960311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.659 [2024-07-15 23:28:34.960343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.659 qpair failed and we were unable to recover it. 00:25:19.659 [2024-07-15 23:28:34.960573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.659 [2024-07-15 23:28:34.960597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.659 qpair failed and we were unable to recover it. 00:25:19.659 [2024-07-15 23:28:34.960829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.659 [2024-07-15 23:28:34.960854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.659 qpair failed and we were unable to recover it. 00:25:19.659 [2024-07-15 23:28:34.960994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.659 [2024-07-15 23:28:34.961018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.659 qpair failed and we were unable to recover it. 00:25:19.659 [2024-07-15 23:28:34.961224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.659 [2024-07-15 23:28:34.961246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.659 qpair failed and we were unable to recover it. 00:25:19.659 [2024-07-15 23:28:34.961477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.659 [2024-07-15 23:28:34.961500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.659 qpair failed and we were unable to recover it. 00:25:19.659 [2024-07-15 23:28:34.961744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.659 [2024-07-15 23:28:34.961768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.659 qpair failed and we were unable to recover it. 00:25:19.659 [2024-07-15 23:28:34.961902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.659 [2024-07-15 23:28:34.961927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.659 qpair failed and we were unable to recover it. 00:25:19.659 [2024-07-15 23:28:34.962063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.659 [2024-07-15 23:28:34.962091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.659 qpair failed and we were unable to recover it. 00:25:19.659 [2024-07-15 23:28:34.962243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.659 [2024-07-15 23:28:34.962269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.659 qpair failed and we were unable to recover it. 
00:25:19.659 [2024-07-15 23:28:34.962592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.659 [2024-07-15 23:28:34.962639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.659 qpair failed and we were unable to recover it. 00:25:19.659 [2024-07-15 23:28:34.962858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.659 [2024-07-15 23:28:34.962883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.659 qpair failed and we were unable to recover it. 00:25:19.659 [2024-07-15 23:28:34.963028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.659 [2024-07-15 23:28:34.963065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.659 qpair failed and we were unable to recover it. 00:25:19.659 [2024-07-15 23:28:34.963217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.659 [2024-07-15 23:28:34.963251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.659 qpair failed and we were unable to recover it. 00:25:19.659 [2024-07-15 23:28:34.963436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.659 [2024-07-15 23:28:34.963460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.659 qpair failed and we were unable to recover it. 00:25:19.659 [2024-07-15 23:28:34.963617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.659 [2024-07-15 23:28:34.963641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.659 qpair failed and we were unable to recover it. 00:25:19.659 [2024-07-15 23:28:34.963822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.659 [2024-07-15 23:28:34.963847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.659 qpair failed and we were unable to recover it. 00:25:19.659 [2024-07-15 23:28:34.964083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.659 [2024-07-15 23:28:34.964123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.659 qpair failed and we were unable to recover it. 00:25:19.659 [2024-07-15 23:28:34.964238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.659 [2024-07-15 23:28:34.964265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.659 qpair failed and we were unable to recover it. 00:25:19.659 [2024-07-15 23:28:34.964523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.659 [2024-07-15 23:28:34.964575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.659 qpair failed and we were unable to recover it. 
00:25:19.659 [2024-07-15 23:28:34.964789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.659 [2024-07-15 23:28:34.964815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.659 qpair failed and we were unable to recover it. 00:25:19.660 [2024-07-15 23:28:34.964927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.660 [2024-07-15 23:28:34.964953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.660 qpair failed and we were unable to recover it. 00:25:19.660 [2024-07-15 23:28:34.965081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.660 [2024-07-15 23:28:34.965122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.660 qpair failed and we were unable to recover it. 00:25:19.660 [2024-07-15 23:28:34.965308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.660 [2024-07-15 23:28:34.965330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.660 qpair failed and we were unable to recover it. 00:25:19.660 [2024-07-15 23:28:34.965519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.660 [2024-07-15 23:28:34.965552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.660 qpair failed and we were unable to recover it. 00:25:19.660 [2024-07-15 23:28:34.965784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.660 [2024-07-15 23:28:34.965809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.660 qpair failed and we were unable to recover it. 00:25:19.660 [2024-07-15 23:28:34.965952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.660 [2024-07-15 23:28:34.965978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.660 qpair failed and we were unable to recover it. 00:25:19.660 [2024-07-15 23:28:34.966239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.660 [2024-07-15 23:28:34.966281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.660 qpair failed and we were unable to recover it. 00:25:19.660 [2024-07-15 23:28:34.966442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.660 [2024-07-15 23:28:34.966484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.660 qpair failed and we were unable to recover it. 00:25:19.660 [2024-07-15 23:28:34.966660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.660 [2024-07-15 23:28:34.966683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.660 qpair failed and we were unable to recover it. 
00:25:19.660 [2024-07-15 23:28:34.966860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.660 [2024-07-15 23:28:34.966885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.660 qpair failed and we were unable to recover it. 00:25:19.660 [2024-07-15 23:28:34.967046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.660 [2024-07-15 23:28:34.967072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.660 qpair failed and we were unable to recover it. 00:25:19.660 [2024-07-15 23:28:34.967282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.660 [2024-07-15 23:28:34.967324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.660 qpair failed and we were unable to recover it. 00:25:19.660 [2024-07-15 23:28:34.967497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.660 [2024-07-15 23:28:34.967538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.660 qpair failed and we were unable to recover it. 00:25:19.660 [2024-07-15 23:28:34.967751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.660 [2024-07-15 23:28:34.967794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.660 qpair failed and we were unable to recover it. 00:25:19.660 [2024-07-15 23:28:34.967940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.660 [2024-07-15 23:28:34.967965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.660 qpair failed and we were unable to recover it. 00:25:19.660 [2024-07-15 23:28:34.968176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.660 [2024-07-15 23:28:34.968214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.660 qpair failed and we were unable to recover it. 00:25:19.660 [2024-07-15 23:28:34.968362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.660 [2024-07-15 23:28:34.968429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.660 qpair failed and we were unable to recover it. 00:25:19.660 [2024-07-15 23:28:34.968628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.660 [2024-07-15 23:28:34.968654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.660 qpair failed and we were unable to recover it. 00:25:19.660 [2024-07-15 23:28:34.968896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.660 [2024-07-15 23:28:34.968922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.660 qpair failed and we were unable to recover it. 
00:25:19.660 [2024-07-15 23:28:34.969071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.660 [2024-07-15 23:28:34.969095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.660 qpair failed and we were unable to recover it. 00:25:19.660 [2024-07-15 23:28:34.969308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.660 [2024-07-15 23:28:34.969335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.660 qpair failed and we were unable to recover it. 00:25:19.660 [2024-07-15 23:28:34.969542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.660 [2024-07-15 23:28:34.969585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.660 qpair failed and we were unable to recover it. 00:25:19.660 [2024-07-15 23:28:34.969783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.660 [2024-07-15 23:28:34.969809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.660 qpair failed and we were unable to recover it. 00:25:19.660 [2024-07-15 23:28:34.969940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.660 [2024-07-15 23:28:34.969964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.660 qpair failed and we were unable to recover it. 00:25:19.660 [2024-07-15 23:28:34.970170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.660 [2024-07-15 23:28:34.970212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.660 qpair failed and we were unable to recover it. 00:25:19.660 [2024-07-15 23:28:34.970380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.660 [2024-07-15 23:28:34.970430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.660 qpair failed and we were unable to recover it. 00:25:19.660 [2024-07-15 23:28:34.970647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.660 [2024-07-15 23:28:34.970673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.660 qpair failed and we were unable to recover it. 00:25:19.660 [2024-07-15 23:28:34.970793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.660 [2024-07-15 23:28:34.970818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.660 qpair failed and we were unable to recover it. 00:25:19.660 [2024-07-15 23:28:34.970957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.660 [2024-07-15 23:28:34.970981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.660 qpair failed and we were unable to recover it. 
00:25:19.660 [2024-07-15 23:28:34.971298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.660 [2024-07-15 23:28:34.971339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.660 qpair failed and we were unable to recover it. 00:25:19.660 [2024-07-15 23:28:34.971573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.660 [2024-07-15 23:28:34.971602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.660 qpair failed and we were unable to recover it. 00:25:19.660 [2024-07-15 23:28:34.971812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.660 [2024-07-15 23:28:34.971838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.660 qpair failed and we were unable to recover it. 00:25:19.660 [2024-07-15 23:28:34.971979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.660 [2024-07-15 23:28:34.972005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.660 qpair failed and we were unable to recover it. 00:25:19.660 [2024-07-15 23:28:34.972132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.660 [2024-07-15 23:28:34.972156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.660 qpair failed and we were unable to recover it. 00:25:19.660 [2024-07-15 23:28:34.972322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.660 [2024-07-15 23:28:34.972356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.660 qpair failed and we were unable to recover it. 00:25:19.936 [2024-07-15 23:28:34.972669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.936 [2024-07-15 23:28:34.972693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.936 qpair failed and we were unable to recover it. 00:25:19.936 [2024-07-15 23:28:34.972855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.936 [2024-07-15 23:28:34.972881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.936 qpair failed and we were unable to recover it. 00:25:19.936 [2024-07-15 23:28:34.973046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.936 [2024-07-15 23:28:34.973071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.936 qpair failed and we were unable to recover it. 00:25:19.936 [2024-07-15 23:28:34.973306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.936 [2024-07-15 23:28:34.973331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.936 qpair failed and we were unable to recover it. 
00:25:19.936 [2024-07-15 23:28:34.973502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.936 [2024-07-15 23:28:34.973535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:19.936 qpair failed and we were unable to recover it. 00:25:19.936 [2024-07-15 23:28:34.973791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.936 [2024-07-15 23:28:34.973832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.936 qpair failed and we were unable to recover it. 00:25:19.936 [2024-07-15 23:28:34.973959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.936 [2024-07-15 23:28:34.973985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.936 qpair failed and we were unable to recover it. 00:25:19.936 [2024-07-15 23:28:34.974124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.936 [2024-07-15 23:28:34.974164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.936 qpair failed and we were unable to recover it. 00:25:19.936 [2024-07-15 23:28:34.974323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.936 [2024-07-15 23:28:34.974348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.936 qpair failed and we were unable to recover it. 00:25:19.936 [2024-07-15 23:28:34.974533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.936 [2024-07-15 23:28:34.974561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.936 qpair failed and we were unable to recover it. 00:25:19.936 [2024-07-15 23:28:34.974735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.936 [2024-07-15 23:28:34.974770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.936 qpair failed and we were unable to recover it. 00:25:19.936 [2024-07-15 23:28:34.974912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.937 [2024-07-15 23:28:34.974937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.937 qpair failed and we were unable to recover it. 00:25:19.937 [2024-07-15 23:28:34.975099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.937 [2024-07-15 23:28:34.975127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.937 qpair failed and we were unable to recover it. 00:25:19.937 [2024-07-15 23:28:34.975261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.937 [2024-07-15 23:28:34.975289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.937 qpair failed and we were unable to recover it. 
00:25:19.937 [2024-07-15 23:28:34.975537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.937 [2024-07-15 23:28:34.975565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.937 qpair failed and we were unable to recover it. 00:25:19.937 [2024-07-15 23:28:34.975793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.937 [2024-07-15 23:28:34.975818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.937 qpair failed and we were unable to recover it. 00:25:19.937 [2024-07-15 23:28:34.975949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.937 [2024-07-15 23:28:34.975974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.937 qpair failed and we were unable to recover it. 00:25:19.937 [2024-07-15 23:28:34.976215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.937 [2024-07-15 23:28:34.976238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.937 qpair failed and we were unable to recover it. 00:25:19.937 [2024-07-15 23:28:34.976430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.937 [2024-07-15 23:28:34.976464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.937 qpair failed and we were unable to recover it. 00:25:19.937 [2024-07-15 23:28:34.976681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.937 [2024-07-15 23:28:34.976708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.937 qpair failed and we were unable to recover it. 00:25:19.937 [2024-07-15 23:28:34.976875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.937 [2024-07-15 23:28:34.976903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.937 qpair failed and we were unable to recover it. 00:25:19.937 [2024-07-15 23:28:34.977016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.937 [2024-07-15 23:28:34.977056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.937 qpair failed and we were unable to recover it. 00:25:19.937 [2024-07-15 23:28:34.977188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.937 [2024-07-15 23:28:34.977220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.937 qpair failed and we were unable to recover it. 00:25:19.937 [2024-07-15 23:28:34.977380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.937 [2024-07-15 23:28:34.977408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.937 qpair failed and we were unable to recover it. 
00:25:19.937 [2024-07-15 23:28:34.977551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.937 [2024-07-15 23:28:34.977579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.937 qpair failed and we were unable to recover it. 00:25:19.937 [2024-07-15 23:28:34.977707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.937 [2024-07-15 23:28:34.977735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.937 qpair failed and we were unable to recover it. 00:25:19.937 [2024-07-15 23:28:34.977928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.937 [2024-07-15 23:28:34.977954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.937 qpair failed and we were unable to recover it. 00:25:19.937 [2024-07-15 23:28:34.978079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.937 [2024-07-15 23:28:34.978107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.937 qpair failed and we were unable to recover it. 00:25:19.937 [2024-07-15 23:28:34.978252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.937 [2024-07-15 23:28:34.978281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.937 qpair failed and we were unable to recover it. 00:25:19.937 [2024-07-15 23:28:34.978435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.937 [2024-07-15 23:28:34.978463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.937 qpair failed and we were unable to recover it. 00:25:19.937 [2024-07-15 23:28:34.978705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.937 [2024-07-15 23:28:34.978733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.937 qpair failed and we were unable to recover it. 00:25:19.937 [2024-07-15 23:28:34.978889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.937 [2024-07-15 23:28:34.978915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.937 qpair failed and we were unable to recover it. 00:25:19.937 [2024-07-15 23:28:34.979043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.937 [2024-07-15 23:28:34.979068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.937 qpair failed and we were unable to recover it. 00:25:19.937 [2024-07-15 23:28:34.979269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.937 [2024-07-15 23:28:34.979297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.937 qpair failed and we were unable to recover it. 
00:25:19.937 [2024-07-15 23:28:34.979451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.937 [2024-07-15 23:28:34.979479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.937 qpair failed and we were unable to recover it. 00:25:19.937 [2024-07-15 23:28:34.979671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.937 [2024-07-15 23:28:34.979699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.937 qpair failed and we were unable to recover it. 00:25:19.937 [2024-07-15 23:28:34.979834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.937 [2024-07-15 23:28:34.979859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.937 qpair failed and we were unable to recover it. 00:25:19.937 [2024-07-15 23:28:34.980009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.937 [2024-07-15 23:28:34.980054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.937 qpair failed and we were unable to recover it. 00:25:19.937 [2024-07-15 23:28:34.980228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.937 [2024-07-15 23:28:34.980250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.937 qpair failed and we were unable to recover it. 00:25:19.937 [2024-07-15 23:28:34.980408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.937 [2024-07-15 23:28:34.980435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.937 qpair failed and we were unable to recover it. 00:25:19.937 [2024-07-15 23:28:34.980629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.937 [2024-07-15 23:28:34.980657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.937 qpair failed and we were unable to recover it. 00:25:19.937 [2024-07-15 23:28:34.980807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.937 [2024-07-15 23:28:34.980831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.937 qpair failed and we were unable to recover it. 00:25:19.937 [2024-07-15 23:28:34.980966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.937 [2024-07-15 23:28:34.980990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.937 qpair failed and we were unable to recover it. 00:25:19.937 [2024-07-15 23:28:34.981138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.937 [2024-07-15 23:28:34.981165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.937 qpair failed and we were unable to recover it. 
00:25:19.937 [2024-07-15 23:28:34.981331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.937 [2024-07-15 23:28:34.981359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.937 qpair failed and we were unable to recover it. 00:25:19.937 [2024-07-15 23:28:34.981578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.937 [2024-07-15 23:28:34.981607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.937 qpair failed and we were unable to recover it. 00:25:19.937 [2024-07-15 23:28:34.981728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.937 [2024-07-15 23:28:34.981762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.937 qpair failed and we were unable to recover it. 00:25:19.937 [2024-07-15 23:28:34.981961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.937 [2024-07-15 23:28:34.981986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.937 qpair failed and we were unable to recover it. 00:25:19.937 [2024-07-15 23:28:34.982113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.937 [2024-07-15 23:28:34.982141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.937 qpair failed and we were unable to recover it. 00:25:19.937 [2024-07-15 23:28:34.982414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.937 [2024-07-15 23:28:34.982442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.937 qpair failed and we were unable to recover it. 00:25:19.937 [2024-07-15 23:28:34.982635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.937 [2024-07-15 23:28:34.982663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.937 qpair failed and we were unable to recover it. 00:25:19.937 [2024-07-15 23:28:34.982843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.938 [2024-07-15 23:28:34.982867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.938 qpair failed and we were unable to recover it. 00:25:19.938 [2024-07-15 23:28:34.983008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.938 [2024-07-15 23:28:34.983047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.938 qpair failed and we were unable to recover it. 00:25:19.938 [2024-07-15 23:28:34.983198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.938 [2024-07-15 23:28:34.983237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.938 qpair failed and we were unable to recover it. 
00:25:19.938 [2024-07-15 23:28:34.983418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.938 [2024-07-15 23:28:34.983446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.938 qpair failed and we were unable to recover it. 00:25:19.938 [2024-07-15 23:28:34.983591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.938 [2024-07-15 23:28:34.983619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.938 qpair failed and we were unable to recover it. 00:25:19.938 [2024-07-15 23:28:34.983780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.938 [2024-07-15 23:28:34.983805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.938 qpair failed and we were unable to recover it. 00:25:19.938 [2024-07-15 23:28:34.983933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.938 [2024-07-15 23:28:34.983973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.938 qpair failed and we were unable to recover it. 00:25:19.938 [2024-07-15 23:28:34.984139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.938 [2024-07-15 23:28:34.984172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.938 qpair failed and we were unable to recover it. 00:25:19.938 [2024-07-15 23:28:34.984379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.938 [2024-07-15 23:28:34.984403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.938 qpair failed and we were unable to recover it. 00:25:19.938 [2024-07-15 23:28:34.984585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.938 [2024-07-15 23:28:34.984613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.938 qpair failed and we were unable to recover it. 00:25:19.938 [2024-07-15 23:28:34.984789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.938 [2024-07-15 23:28:34.984818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.938 qpair failed and we were unable to recover it. 00:25:19.938 [2024-07-15 23:28:34.984968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.938 [2024-07-15 23:28:34.984992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.938 qpair failed and we were unable to recover it. 00:25:19.938 [2024-07-15 23:28:34.985227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.938 [2024-07-15 23:28:34.985255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.938 qpair failed and we were unable to recover it. 
00:25:19.938 [2024-07-15 23:28:34.985405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.938 [2024-07-15 23:28:34.985434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.938 qpair failed and we were unable to recover it. 00:25:19.938 [2024-07-15 23:28:34.985611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.938 [2024-07-15 23:28:34.985634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.938 qpair failed and we were unable to recover it. 00:25:19.938 [2024-07-15 23:28:34.985809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.938 [2024-07-15 23:28:34.985838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.938 qpair failed and we were unable to recover it. 00:25:19.938 [2024-07-15 23:28:34.985948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.938 [2024-07-15 23:28:34.985977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.938 qpair failed and we were unable to recover it. 00:25:19.938 [2024-07-15 23:28:34.986124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.938 [2024-07-15 23:28:34.986162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.938 qpair failed and we were unable to recover it. 00:25:19.938 [2024-07-15 23:28:34.986390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.938 [2024-07-15 23:28:34.986418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.938 qpair failed and we were unable to recover it. 00:25:19.938 [2024-07-15 23:28:34.986630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.938 [2024-07-15 23:28:34.986659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.938 qpair failed and we were unable to recover it. 00:25:19.938 [2024-07-15 23:28:34.986843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.938 [2024-07-15 23:28:34.986867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.938 qpair failed and we were unable to recover it. 00:25:19.938 [2024-07-15 23:28:34.987022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.938 [2024-07-15 23:28:34.987050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.938 qpair failed and we were unable to recover it. 00:25:19.938 [2024-07-15 23:28:34.987192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.938 [2024-07-15 23:28:34.987220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.938 qpair failed and we were unable to recover it. 
00:25:19.938 [2024-07-15 23:28:34.987349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.938 [2024-07-15 23:28:34.987373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.938 qpair failed and we were unable to recover it. 00:25:19.938 [2024-07-15 23:28:34.987527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.938 [2024-07-15 23:28:34.987565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.938 qpair failed and we were unable to recover it. 00:25:19.938 [2024-07-15 23:28:34.987717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.938 [2024-07-15 23:28:34.987751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.938 qpair failed and we were unable to recover it. 00:25:19.938 [2024-07-15 23:28:34.987889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.938 [2024-07-15 23:28:34.987927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.938 qpair failed and we were unable to recover it. 00:25:19.938 [2024-07-15 23:28:34.988096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.938 [2024-07-15 23:28:34.988119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.938 qpair failed and we were unable to recover it. 00:25:19.938 [2024-07-15 23:28:34.988244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.938 [2024-07-15 23:28:34.988272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.938 qpair failed and we were unable to recover it. 00:25:19.938 [2024-07-15 23:28:34.988439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.938 [2024-07-15 23:28:34.988477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.938 qpair failed and we were unable to recover it. 00:25:19.938 [2024-07-15 23:28:34.988585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.938 [2024-07-15 23:28:34.988626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.938 qpair failed and we were unable to recover it. 00:25:19.938 [2024-07-15 23:28:34.988783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.938 [2024-07-15 23:28:34.988808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.938 qpair failed and we were unable to recover it. 00:25:19.938 [2024-07-15 23:28:34.988912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.938 [2024-07-15 23:28:34.988936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.938 qpair failed and we were unable to recover it. 
00:25:19.938 [2024-07-15 23:28:34.989124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.938 [2024-07-15 23:28:34.989152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.938 qpair failed and we were unable to recover it. 00:25:19.938 [2024-07-15 23:28:34.989301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.938 [2024-07-15 23:28:34.989330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.938 qpair failed and we were unable to recover it. 00:25:19.938 [2024-07-15 23:28:34.989500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.938 [2024-07-15 23:28:34.989522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.938 qpair failed and we were unable to recover it. 00:25:19.938 [2024-07-15 23:28:34.989778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.938 [2024-07-15 23:28:34.989806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.938 qpair failed and we were unable to recover it. 00:25:19.938 [2024-07-15 23:28:34.989974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.938 [2024-07-15 23:28:34.990002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.938 qpair failed and we were unable to recover it. 00:25:19.938 [2024-07-15 23:28:34.990138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.938 [2024-07-15 23:28:34.990175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.938 qpair failed and we were unable to recover it. 00:25:19.938 [2024-07-15 23:28:34.990361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.938 [2024-07-15 23:28:34.990389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.938 qpair failed and we were unable to recover it. 00:25:19.938 [2024-07-15 23:28:34.990537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.938 [2024-07-15 23:28:34.990565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.938 qpair failed and we were unable to recover it. 00:25:19.938 [2024-07-15 23:28:34.990689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.939 [2024-07-15 23:28:34.990713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.939 qpair failed and we were unable to recover it. 00:25:19.939 [2024-07-15 23:28:34.990839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.939 [2024-07-15 23:28:34.990864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.939 qpair failed and we were unable to recover it. 
00:25:19.939 [2024-07-15 23:28:34.990989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.939 [2024-07-15 23:28:34.991017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.939 qpair failed and we were unable to recover it. 00:25:19.939 [2024-07-15 23:28:34.991285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.939 [2024-07-15 23:28:34.991308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.939 qpair failed and we were unable to recover it. 00:25:19.939 [2024-07-15 23:28:34.991533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.939 [2024-07-15 23:28:34.991561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.939 qpair failed and we were unable to recover it. 00:25:19.939 [2024-07-15 23:28:34.991678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.939 [2024-07-15 23:28:34.991706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.939 qpair failed and we were unable to recover it. 00:25:19.939 [2024-07-15 23:28:34.991857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.939 [2024-07-15 23:28:34.991885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.939 qpair failed and we were unable to recover it. 00:25:19.939 [2024-07-15 23:28:34.992011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.939 [2024-07-15 23:28:34.992061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.939 qpair failed and we were unable to recover it. 00:25:19.939 [2024-07-15 23:28:34.992244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.939 [2024-07-15 23:28:34.992272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.939 qpair failed and we were unable to recover it. 00:25:19.939 [2024-07-15 23:28:34.992440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.939 [2024-07-15 23:28:34.992463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.939 qpair failed and we were unable to recover it. 00:25:19.939 [2024-07-15 23:28:34.992671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.939 [2024-07-15 23:28:34.992699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.939 qpair failed and we were unable to recover it. 00:25:19.939 [2024-07-15 23:28:34.992845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.939 [2024-07-15 23:28:34.992873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.939 qpair failed and we were unable to recover it. 
00:25:19.939 [2024-07-15 23:28:34.993002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.939 [2024-07-15 23:28:34.993041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.939 qpair failed and we were unable to recover it. 00:25:19.939 [2024-07-15 23:28:34.993180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.939 [2024-07-15 23:28:34.993219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.939 qpair failed and we were unable to recover it. 00:25:19.939 [2024-07-15 23:28:34.993388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.939 [2024-07-15 23:28:34.993415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.939 qpair failed and we were unable to recover it. 00:25:19.939 [2024-07-15 23:28:34.993582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.939 [2024-07-15 23:28:34.993610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.939 qpair failed and we were unable to recover it. 00:25:19.939 [2024-07-15 23:28:34.993771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.939 [2024-07-15 23:28:34.993811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.939 qpair failed and we were unable to recover it. 00:25:19.939 [2024-07-15 23:28:34.993964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.939 [2024-07-15 23:28:34.993988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.939 qpair failed and we were unable to recover it. 00:25:19.939 [2024-07-15 23:28:34.994171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.939 [2024-07-15 23:28:34.994194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.939 qpair failed and we were unable to recover it. 00:25:19.939 [2024-07-15 23:28:34.994419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.939 [2024-07-15 23:28:34.994447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.939 qpair failed and we were unable to recover it. 00:25:19.939 [2024-07-15 23:28:34.994566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.939 [2024-07-15 23:28:34.994594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.939 qpair failed and we were unable to recover it. 00:25:19.939 [2024-07-15 23:28:34.994807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.939 [2024-07-15 23:28:34.994846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.939 qpair failed and we were unable to recover it. 
00:25:19.939 [2024-07-15 23:28:34.994959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.939 [2024-07-15 23:28:34.994987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.939 qpair failed and we were unable to recover it. 00:25:19.939 [2024-07-15 23:28:34.995197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.939 [2024-07-15 23:28:34.995224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.939 qpair failed and we were unable to recover it. 00:25:19.939 [2024-07-15 23:28:34.995338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.939 [2024-07-15 23:28:34.995361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.939 qpair failed and we were unable to recover it. 00:25:19.939 [2024-07-15 23:28:34.995507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.939 [2024-07-15 23:28:34.995531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.939 qpair failed and we were unable to recover it. 00:25:19.939 [2024-07-15 23:28:34.995671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.939 [2024-07-15 23:28:34.995699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.939 qpair failed and we were unable to recover it. 00:25:19.939 [2024-07-15 23:28:34.995861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.939 [2024-07-15 23:28:34.995885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.939 qpair failed and we were unable to recover it. 00:25:19.939 [2024-07-15 23:28:34.995995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.939 [2024-07-15 23:28:34.996036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.939 qpair failed and we were unable to recover it. 00:25:19.939 [2024-07-15 23:28:34.996253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.939 [2024-07-15 23:28:34.996280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.939 qpair failed and we were unable to recover it. 00:25:19.939 [2024-07-15 23:28:34.996446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.939 [2024-07-15 23:28:34.996469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.939 qpair failed and we were unable to recover it. 00:25:19.939 [2024-07-15 23:28:34.996708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.939 [2024-07-15 23:28:34.996736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.939 qpair failed and we were unable to recover it. 
00:25:19.939 [2024-07-15 23:28:34.996891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.939 [2024-07-15 23:28:34.996919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.939 qpair failed and we were unable to recover it. 00:25:19.939 [2024-07-15 23:28:34.997146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.939 [2024-07-15 23:28:34.997169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.939 qpair failed and we were unable to recover it. 00:25:19.939 [2024-07-15 23:28:34.997354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.939 [2024-07-15 23:28:34.997382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.939 qpair failed and we were unable to recover it. 00:25:19.939 [2024-07-15 23:28:34.997555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.939 [2024-07-15 23:28:34.997583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.939 qpair failed and we were unable to recover it. 00:25:19.939 [2024-07-15 23:28:34.997719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.939 [2024-07-15 23:28:34.997747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.939 qpair failed and we were unable to recover it. 00:25:19.939 [2024-07-15 23:28:34.997894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.939 [2024-07-15 23:28:34.997934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.939 qpair failed and we were unable to recover it. 00:25:19.939 [2024-07-15 23:28:34.998077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.939 [2024-07-15 23:28:34.998105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.939 qpair failed and we were unable to recover it. 00:25:19.939 [2024-07-15 23:28:34.998250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.939 [2024-07-15 23:28:34.998287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.939 qpair failed and we were unable to recover it. 00:25:19.939 [2024-07-15 23:28:34.998524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.939 [2024-07-15 23:28:34.998553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.940 qpair failed and we were unable to recover it. 00:25:19.940 [2024-07-15 23:28:34.998705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.940 [2024-07-15 23:28:34.998733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.940 qpair failed and we were unable to recover it. 
00:25:19.940 [2024-07-15 23:28:34.998877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.940 [2024-07-15 23:28:34.998917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.940 qpair failed and we were unable to recover it. 00:25:19.940 [2024-07-15 23:28:34.999067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.940 [2024-07-15 23:28:34.999091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.940 qpair failed and we were unable to recover it. 00:25:19.940 [2024-07-15 23:28:34.999277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.940 [2024-07-15 23:28:34.999305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.940 qpair failed and we were unable to recover it. 00:25:19.940 [2024-07-15 23:28:34.999503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.940 [2024-07-15 23:28:34.999526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.940 qpair failed and we were unable to recover it. 00:25:19.940 [2024-07-15 23:28:34.999663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.940 [2024-07-15 23:28:34.999690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.940 qpair failed and we were unable to recover it. 00:25:19.940 [2024-07-15 23:28:34.999843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.940 [2024-07-15 23:28:34.999868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.940 qpair failed and we were unable to recover it. 00:25:19.940 [2024-07-15 23:28:34.999981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.940 [2024-07-15 23:28:35.000005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.940 qpair failed and we were unable to recover it. 00:25:19.940 [2024-07-15 23:28:35.000152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.940 [2024-07-15 23:28:35.000192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.940 qpair failed and we were unable to recover it. 00:25:19.940 [2024-07-15 23:28:35.000318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.940 [2024-07-15 23:28:35.000347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.940 qpair failed and we were unable to recover it. 00:25:19.940 [2024-07-15 23:28:35.000475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.940 [2024-07-15 23:28:35.000499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.940 qpair failed and we were unable to recover it. 
00:25:19.940 [2024-07-15 23:28:35.000715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.940 [2024-07-15 23:28:35.000755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.940 qpair failed and we were unable to recover it. 00:25:19.940 [2024-07-15 23:28:35.000915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.940 [2024-07-15 23:28:35.000944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.940 qpair failed and we were unable to recover it. 00:25:19.940 [2024-07-15 23:28:35.001072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.940 [2024-07-15 23:28:35.001113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.940 qpair failed and we were unable to recover it. 00:25:19.940 [2024-07-15 23:28:35.001255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.940 [2024-07-15 23:28:35.001297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.940 qpair failed and we were unable to recover it. 00:25:19.940 [2024-07-15 23:28:35.001446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.940 [2024-07-15 23:28:35.001474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.940 qpair failed and we were unable to recover it. 00:25:19.940 [2024-07-15 23:28:35.001641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.940 [2024-07-15 23:28:35.001665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.940 qpair failed and we were unable to recover it. 00:25:19.940 [2024-07-15 23:28:35.001846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.940 [2024-07-15 23:28:35.001876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.940 qpair failed and we were unable to recover it. 00:25:19.940 [2024-07-15 23:28:35.001997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.940 [2024-07-15 23:28:35.002025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.940 qpair failed and we were unable to recover it. 00:25:19.940 [2024-07-15 23:28:35.002231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.940 [2024-07-15 23:28:35.002254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.940 qpair failed and we were unable to recover it. 00:25:19.940 [2024-07-15 23:28:35.002426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.940 [2024-07-15 23:28:35.002454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.940 qpair failed and we were unable to recover it. 
00:25:19.940 [2024-07-15 23:28:35.002592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.940 [2024-07-15 23:28:35.002620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.940 qpair failed and we were unable to recover it. 00:25:19.940 [2024-07-15 23:28:35.002790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.940 [2024-07-15 23:28:35.002815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.940 qpair failed and we were unable to recover it. 00:25:19.940 [2024-07-15 23:28:35.002977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.940 [2024-07-15 23:28:35.003004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.940 qpair failed and we were unable to recover it. 00:25:19.940 [2024-07-15 23:28:35.003112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.940 [2024-07-15 23:28:35.003140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.940 qpair failed and we were unable to recover it. 00:25:19.940 [2024-07-15 23:28:35.003312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.940 [2024-07-15 23:28:35.003350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.940 qpair failed and we were unable to recover it. 00:25:19.940 [2024-07-15 23:28:35.003544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.940 [2024-07-15 23:28:35.003572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.940 qpair failed and we were unable to recover it. 00:25:19.940 [2024-07-15 23:28:35.003689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.940 [2024-07-15 23:28:35.003717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.940 qpair failed and we were unable to recover it. 00:25:19.940 [2024-07-15 23:28:35.003896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.940 [2024-07-15 23:28:35.003936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.940 qpair failed and we were unable to recover it. 00:25:19.940 [2024-07-15 23:28:35.004092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.940 [2024-07-15 23:28:35.004120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.940 qpair failed and we were unable to recover it. 00:25:19.940 [2024-07-15 23:28:35.004266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.940 [2024-07-15 23:28:35.004294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.940 qpair failed and we were unable to recover it. 
00:25:19.940 [2024-07-15 23:28:35.004510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.940 [2024-07-15 23:28:35.004533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.940 qpair failed and we were unable to recover it. 00:25:19.940 [2024-07-15 23:28:35.004718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.940 [2024-07-15 23:28:35.004752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.940 qpair failed and we were unable to recover it. 00:25:19.940 [2024-07-15 23:28:35.004900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.940 [2024-07-15 23:28:35.004925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.940 qpair failed and we were unable to recover it. 00:25:19.940 [2024-07-15 23:28:35.005051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.940 [2024-07-15 23:28:35.005074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.940 qpair failed and we were unable to recover it. 00:25:19.940 [2024-07-15 23:28:35.005245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.940 [2024-07-15 23:28:35.005273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.940 qpair failed and we were unable to recover it. 00:25:19.941 [2024-07-15 23:28:35.005409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.941 [2024-07-15 23:28:35.005437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.941 qpair failed and we were unable to recover it. 00:25:19.941 [2024-07-15 23:28:35.005625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.941 [2024-07-15 23:28:35.005648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.941 qpair failed and we were unable to recover it. 00:25:19.941 [2024-07-15 23:28:35.005871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.941 [2024-07-15 23:28:35.005900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.941 qpair failed and we were unable to recover it. 00:25:19.941 [2024-07-15 23:28:35.006050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.941 [2024-07-15 23:28:35.006078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.941 qpair failed and we were unable to recover it. 00:25:19.941 [2024-07-15 23:28:35.006263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.941 [2024-07-15 23:28:35.006286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.941 qpair failed and we were unable to recover it. 
00:25:19.941 [2024-07-15 23:28:35.006468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.941 [2024-07-15 23:28:35.006510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.941 qpair failed and we were unable to recover it. 00:25:19.941 [2024-07-15 23:28:35.006654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.941 [2024-07-15 23:28:35.006683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.941 qpair failed and we were unable to recover it. 00:25:19.941 [2024-07-15 23:28:35.006828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.941 [2024-07-15 23:28:35.006853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.941 qpair failed and we were unable to recover it. 00:25:19.941 [2024-07-15 23:28:35.007031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.941 [2024-07-15 23:28:35.007054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.941 qpair failed and we were unable to recover it. 00:25:19.941 [2024-07-15 23:28:35.007190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.941 [2024-07-15 23:28:35.007222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.941 qpair failed and we were unable to recover it. 00:25:19.941 [2024-07-15 23:28:35.007412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.941 [2024-07-15 23:28:35.007436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.941 qpair failed and we were unable to recover it. 00:25:19.941 [2024-07-15 23:28:35.007585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.941 [2024-07-15 23:28:35.007609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.941 qpair failed and we were unable to recover it. 00:25:19.941 [2024-07-15 23:28:35.007746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.941 [2024-07-15 23:28:35.007774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.941 qpair failed and we were unable to recover it. 00:25:19.941 [2024-07-15 23:28:35.007913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.941 [2024-07-15 23:28:35.007938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.941 qpair failed and we were unable to recover it. 00:25:19.941 [2024-07-15 23:28:35.008122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.941 [2024-07-15 23:28:35.008151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.941 qpair failed and we were unable to recover it. 
00:25:19.941 [2024-07-15 23:28:35.008322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.941 [2024-07-15 23:28:35.008350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.941 qpair failed and we were unable to recover it. 00:25:19.941 [2024-07-15 23:28:35.008517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.941 [2024-07-15 23:28:35.008541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.941 qpair failed and we were unable to recover it. 00:25:19.941 [2024-07-15 23:28:35.008663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.941 [2024-07-15 23:28:35.008704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.941 qpair failed and we were unable to recover it. 00:25:19.941 [2024-07-15 23:28:35.008837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.941 [2024-07-15 23:28:35.008876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.941 qpair failed and we were unable to recover it. 00:25:19.941 [2024-07-15 23:28:35.009042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.941 [2024-07-15 23:28:35.009066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.941 qpair failed and we were unable to recover it. 00:25:19.941 [2024-07-15 23:28:35.009223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.941 [2024-07-15 23:28:35.009246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.941 qpair failed and we were unable to recover it. 00:25:19.941 [2024-07-15 23:28:35.009464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.941 [2024-07-15 23:28:35.009495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.941 qpair failed and we were unable to recover it. 00:25:19.941 [2024-07-15 23:28:35.009691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.941 [2024-07-15 23:28:35.009719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.941 qpair failed and we were unable to recover it. 00:25:19.941 [2024-07-15 23:28:35.009864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.941 [2024-07-15 23:28:35.009888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.941 qpair failed and we were unable to recover it. 00:25:19.941 [2024-07-15 23:28:35.010038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.941 [2024-07-15 23:28:35.010066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.941 qpair failed and we were unable to recover it. 
00:25:19.941 [2024-07-15 23:28:35.010217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.941 [2024-07-15 23:28:35.010256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.941 qpair failed and we were unable to recover it. 00:25:19.941 [2024-07-15 23:28:35.010424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.941 [2024-07-15 23:28:35.010447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.941 qpair failed and we were unable to recover it. 00:25:19.941 [2024-07-15 23:28:35.010670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.941 [2024-07-15 23:28:35.010699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.941 qpair failed and we were unable to recover it. 00:25:19.941 [2024-07-15 23:28:35.010841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.941 [2024-07-15 23:28:35.010866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.941 qpair failed and we were unable to recover it. 00:25:19.941 [2024-07-15 23:28:35.011005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.941 [2024-07-15 23:28:35.011046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.941 qpair failed and we were unable to recover it. 00:25:19.941 [2024-07-15 23:28:35.011192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.941 [2024-07-15 23:28:35.011221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.941 qpair failed and we were unable to recover it. 00:25:19.941 [2024-07-15 23:28:35.011367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.941 [2024-07-15 23:28:35.011405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.941 qpair failed and we were unable to recover it. 00:25:19.941 [2024-07-15 23:28:35.011522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.941 [2024-07-15 23:28:35.011561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.941 qpair failed and we were unable to recover it. 00:25:19.941 [2024-07-15 23:28:35.011759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.941 [2024-07-15 23:28:35.011788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.941 qpair failed and we were unable to recover it. 00:25:19.941 [2024-07-15 23:28:35.011913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.941 [2024-07-15 23:28:35.011952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.941 qpair failed and we were unable to recover it. 
00:25:19.941 [2024-07-15 23:28:35.012109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.941 [2024-07-15 23:28:35.012151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.941 qpair failed and we were unable to recover it. 00:25:19.941 [2024-07-15 23:28:35.012271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.942 [2024-07-15 23:28:35.012299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.942 qpair failed and we were unable to recover it. 00:25:19.942 [2024-07-15 23:28:35.012501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.942 [2024-07-15 23:28:35.012524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.942 qpair failed and we were unable to recover it. 00:25:19.942 [2024-07-15 23:28:35.012652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.942 [2024-07-15 23:28:35.012681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.942 qpair failed and we were unable to recover it. 00:25:19.942 [2024-07-15 23:28:35.012820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.942 [2024-07-15 23:28:35.012848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.942 qpair failed and we were unable to recover it. 00:25:19.942 [2024-07-15 23:28:35.013096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.942 [2024-07-15 23:28:35.013119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.942 qpair failed and we were unable to recover it. 00:25:19.942 [2024-07-15 23:28:35.013381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.942 [2024-07-15 23:28:35.013410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.942 qpair failed and we were unable to recover it. 00:25:19.942 [2024-07-15 23:28:35.013559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.942 [2024-07-15 23:28:35.013587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.942 qpair failed and we were unable to recover it. 00:25:19.942 [2024-07-15 23:28:35.013761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.942 [2024-07-15 23:28:35.013786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.942 qpair failed and we were unable to recover it. 00:25:19.942 [2024-07-15 23:28:35.013947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.942 [2024-07-15 23:28:35.013974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.942 qpair failed and we were unable to recover it. 
00:25:19.942 [2024-07-15 23:28:35.014107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.942 [2024-07-15 23:28:35.014136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.942 qpair failed and we were unable to recover it. 00:25:19.942 [2024-07-15 23:28:35.014325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.942 [2024-07-15 23:28:35.014348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.942 qpair failed and we were unable to recover it. 00:25:19.942 [2024-07-15 23:28:35.014498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.942 [2024-07-15 23:28:35.014526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.942 qpair failed and we were unable to recover it. 00:25:19.942 [2024-07-15 23:28:35.014753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.942 [2024-07-15 23:28:35.014796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.942 qpair failed and we were unable to recover it. 00:25:19.942 [2024-07-15 23:28:35.014945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.942 [2024-07-15 23:28:35.014969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.942 qpair failed and we were unable to recover it. 00:25:19.942 [2024-07-15 23:28:35.015088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.942 [2024-07-15 23:28:35.015130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.942 qpair failed and we were unable to recover it. 00:25:19.942 [2024-07-15 23:28:35.015280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.942 [2024-07-15 23:28:35.015308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.942 qpair failed and we were unable to recover it. 00:25:19.942 [2024-07-15 23:28:35.015488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.942 [2024-07-15 23:28:35.015511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.942 qpair failed and we were unable to recover it. 00:25:19.942 [2024-07-15 23:28:35.015629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.942 [2024-07-15 23:28:35.015670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.942 qpair failed and we were unable to recover it. 00:25:19.942 [2024-07-15 23:28:35.015775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.942 [2024-07-15 23:28:35.015804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.942 qpair failed and we were unable to recover it. 
00:25:19.942 [2024-07-15 23:28:35.015984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.942 [2024-07-15 23:28:35.016009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.942 qpair failed and we were unable to recover it. 00:25:19.942 [2024-07-15 23:28:35.016162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.942 [2024-07-15 23:28:35.016190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.942 qpair failed and we were unable to recover it. 00:25:19.942 [2024-07-15 23:28:35.016327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.942 [2024-07-15 23:28:35.016355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.942 qpair failed and we were unable to recover it. 00:25:19.942 [2024-07-15 23:28:35.016577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.942 [2024-07-15 23:28:35.016600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.942 qpair failed and we were unable to recover it. 00:25:19.942 [2024-07-15 23:28:35.016762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.942 [2024-07-15 23:28:35.016791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.942 qpair failed and we were unable to recover it. 00:25:19.942 [2024-07-15 23:28:35.017022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.942 [2024-07-15 23:28:35.017050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.942 qpair failed and we were unable to recover it. 00:25:19.942 [2024-07-15 23:28:35.017171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.942 [2024-07-15 23:28:35.017195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.942 qpair failed and we were unable to recover it. 00:25:19.942 [2024-07-15 23:28:35.017361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.942 [2024-07-15 23:28:35.017402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.942 qpair failed and we were unable to recover it. 00:25:19.942 [2024-07-15 23:28:35.017575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.942 [2024-07-15 23:28:35.017604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.942 qpair failed and we were unable to recover it. 00:25:19.942 [2024-07-15 23:28:35.017750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.942 [2024-07-15 23:28:35.017774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.942 qpair failed and we were unable to recover it. 
00:25:19.942 [2024-07-15 23:28:35.017921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.942 [2024-07-15 23:28:35.017963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.942 qpair failed and we were unable to recover it. 00:25:19.942 [2024-07-15 23:28:35.018104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.942 [2024-07-15 23:28:35.018133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.942 qpair failed and we were unable to recover it. 00:25:19.942 [2024-07-15 23:28:35.018364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.942 [2024-07-15 23:28:35.018387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.942 qpair failed and we were unable to recover it. 00:25:19.942 [2024-07-15 23:28:35.018567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.942 [2024-07-15 23:28:35.018594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.942 qpair failed and we were unable to recover it. 00:25:19.942 [2024-07-15 23:28:35.018810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.942 [2024-07-15 23:28:35.018838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.942 qpair failed and we were unable to recover it. 00:25:19.942 [2024-07-15 23:28:35.018984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.942 [2024-07-15 23:28:35.019008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.942 qpair failed and we were unable to recover it. 00:25:19.942 [2024-07-15 23:28:35.019213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.942 [2024-07-15 23:28:35.019241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.942 qpair failed and we were unable to recover it. 00:25:19.942 [2024-07-15 23:28:35.019354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.942 [2024-07-15 23:28:35.019382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.942 qpair failed and we were unable to recover it. 00:25:19.942 [2024-07-15 23:28:35.019486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.942 [2024-07-15 23:28:35.019509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.942 qpair failed and we were unable to recover it. 00:25:19.942 [2024-07-15 23:28:35.019699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.942 [2024-07-15 23:28:35.019727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.942 qpair failed and we were unable to recover it. 
00:25:19.942 [2024-07-15 23:28:35.019939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.942 [2024-07-15 23:28:35.019964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.942 qpair failed and we were unable to recover it. 00:25:19.942 [2024-07-15 23:28:35.020139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.942 [2024-07-15 23:28:35.020167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.942 qpair failed and we were unable to recover it. 00:25:19.942 [2024-07-15 23:28:35.020320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.943 [2024-07-15 23:28:35.020359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.943 qpair failed and we were unable to recover it. 00:25:19.943 [2024-07-15 23:28:35.020466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.943 [2024-07-15 23:28:35.020494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.943 qpair failed and we were unable to recover it. 00:25:19.943 [2024-07-15 23:28:35.020666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.943 [2024-07-15 23:28:35.020703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.943 qpair failed and we were unable to recover it. 00:25:19.943 [2024-07-15 23:28:35.020872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.943 [2024-07-15 23:28:35.020901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.943 qpair failed and we were unable to recover it. 00:25:19.943 [2024-07-15 23:28:35.021111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.943 [2024-07-15 23:28:35.021140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.943 qpair failed and we were unable to recover it. 00:25:19.943 [2024-07-15 23:28:35.021268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.943 [2024-07-15 23:28:35.021291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.943 qpair failed and we were unable to recover it. 00:25:19.943 [2024-07-15 23:28:35.021451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.943 [2024-07-15 23:28:35.021489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.943 qpair failed and we were unable to recover it. 00:25:19.943 [2024-07-15 23:28:35.021714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.943 [2024-07-15 23:28:35.021749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.943 qpair failed and we were unable to recover it. 
00:25:19.943 [2024-07-15 23:28:35.021900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.943 [2024-07-15 23:28:35.021925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.943 qpair failed and we were unable to recover it. 00:25:19.943 [2024-07-15 23:28:35.022065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.943 [2024-07-15 23:28:35.022089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.943 qpair failed and we were unable to recover it. 00:25:19.943 [2024-07-15 23:28:35.022218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.943 [2024-07-15 23:28:35.022246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.943 qpair failed and we were unable to recover it. 00:25:19.943 [2024-07-15 23:28:35.022453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.943 [2024-07-15 23:28:35.022476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.943 qpair failed and we were unable to recover it. 00:25:19.943 [2024-07-15 23:28:35.022641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.943 [2024-07-15 23:28:35.022669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.943 qpair failed and we were unable to recover it. 00:25:19.943 [2024-07-15 23:28:35.022797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.943 [2024-07-15 23:28:35.022825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.943 qpair failed and we were unable to recover it. 00:25:19.943 [2024-07-15 23:28:35.022965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.943 [2024-07-15 23:28:35.022989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.943 qpair failed and we were unable to recover it. 00:25:19.943 [2024-07-15 23:28:35.023184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.943 [2024-07-15 23:28:35.023213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.943 qpair failed and we were unable to recover it. 00:25:19.943 [2024-07-15 23:28:35.023402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.943 [2024-07-15 23:28:35.023430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.943 qpair failed and we were unable to recover it. 00:25:19.943 [2024-07-15 23:28:35.023551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.943 [2024-07-15 23:28:35.023589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.943 qpair failed and we were unable to recover it. 
00:25:19.943 [2024-07-15 23:28:35.023814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.943 [2024-07-15 23:28:35.023843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.943 qpair failed and we were unable to recover it. 00:25:19.943 [2024-07-15 23:28:35.023990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.943 [2024-07-15 23:28:35.024018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.943 qpair failed and we were unable to recover it. 00:25:19.943 [2024-07-15 23:28:35.024155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.943 [2024-07-15 23:28:35.024179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.943 qpair failed and we were unable to recover it. 00:25:19.943 [2024-07-15 23:28:35.024335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.943 [2024-07-15 23:28:35.024376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.943 qpair failed and we were unable to recover it. 00:25:19.943 [2024-07-15 23:28:35.024507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.943 [2024-07-15 23:28:35.024535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.943 qpair failed and we were unable to recover it. 00:25:19.943 [2024-07-15 23:28:35.024651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.943 [2024-07-15 23:28:35.024691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.943 qpair failed and we were unable to recover it. 00:25:19.943 [2024-07-15 23:28:35.024879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.943 [2024-07-15 23:28:35.024904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.943 qpair failed and we were unable to recover it. 00:25:19.943 [2024-07-15 23:28:35.025043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.943 [2024-07-15 23:28:35.025071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.943 qpair failed and we were unable to recover it. 00:25:19.943 [2024-07-15 23:28:35.025235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.943 [2024-07-15 23:28:35.025258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.943 qpair failed and we were unable to recover it. 00:25:19.943 [2024-07-15 23:28:35.025463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.943 [2024-07-15 23:28:35.025503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.943 qpair failed and we were unable to recover it. 
00:25:19.943 [2024-07-15 23:28:35.025646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.943 [2024-07-15 23:28:35.025674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.943 qpair failed and we were unable to recover it. 00:25:19.943 [2024-07-15 23:28:35.025810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.943 [2024-07-15 23:28:35.025835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.943 qpair failed and we were unable to recover it. 00:25:19.943 [2024-07-15 23:28:35.025965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.943 [2024-07-15 23:28:35.025989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.943 qpair failed and we were unable to recover it. 00:25:19.943 [2024-07-15 23:28:35.026174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.943 [2024-07-15 23:28:35.026202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.943 qpair failed and we were unable to recover it. 00:25:19.943 [2024-07-15 23:28:35.026411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.943 [2024-07-15 23:28:35.026434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.943 qpair failed and we were unable to recover it. 00:25:19.943 [2024-07-15 23:28:35.026603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.943 [2024-07-15 23:28:35.026632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.943 qpair failed and we were unable to recover it. 00:25:19.943 [2024-07-15 23:28:35.026813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.943 [2024-07-15 23:28:35.026841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.943 qpair failed and we were unable to recover it. 00:25:19.943 [2024-07-15 23:28:35.026980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.943 [2024-07-15 23:28:35.027005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.943 qpair failed and we were unable to recover it. 00:25:19.943 [2024-07-15 23:28:35.027141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.943 [2024-07-15 23:28:35.027165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.943 qpair failed and we were unable to recover it. 00:25:19.943 [2024-07-15 23:28:35.027315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.943 [2024-07-15 23:28:35.027343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.943 qpair failed and we were unable to recover it. 
00:25:19.943 [2024-07-15 23:28:35.027482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.943 [2024-07-15 23:28:35.027520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.943 qpair failed and we were unable to recover it. 00:25:19.943 [2024-07-15 23:28:35.027696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.943 [2024-07-15 23:28:35.027729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.943 qpair failed and we were unable to recover it. 00:25:19.943 [2024-07-15 23:28:35.027885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.943 [2024-07-15 23:28:35.027913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.943 qpair failed and we were unable to recover it. 00:25:19.943 [2024-07-15 23:28:35.028089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.944 [2024-07-15 23:28:35.028113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.944 qpair failed and we were unable to recover it. 00:25:19.944 [2024-07-15 23:28:35.028275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.944 [2024-07-15 23:28:35.028315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.944 qpair failed and we were unable to recover it. 00:25:19.944 [2024-07-15 23:28:35.028436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.944 [2024-07-15 23:28:35.028465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.944 qpair failed and we were unable to recover it. 00:25:19.944 [2024-07-15 23:28:35.028610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.944 [2024-07-15 23:28:35.028639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.944 qpair failed and we were unable to recover it. 00:25:19.944 [2024-07-15 23:28:35.028800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.944 [2024-07-15 23:28:35.028824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.944 qpair failed and we were unable to recover it. 00:25:19.944 [2024-07-15 23:28:35.028950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.944 [2024-07-15 23:28:35.028974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.944 qpair failed and we were unable to recover it. 00:25:19.944 [2024-07-15 23:28:35.029121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.944 [2024-07-15 23:28:35.029144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.944 qpair failed and we were unable to recover it. 
00:25:19.944 [2024-07-15 23:28:35.029415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.944 [2024-07-15 23:28:35.029443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.944 qpair failed and we were unable to recover it. 00:25:19.944 [2024-07-15 23:28:35.029607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.944 [2024-07-15 23:28:35.029635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.944 qpair failed and we were unable to recover it. 00:25:19.944 [2024-07-15 23:28:35.029780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.944 [2024-07-15 23:28:35.029804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.944 qpair failed and we were unable to recover it. 00:25:19.944 [2024-07-15 23:28:35.029934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.944 [2024-07-15 23:28:35.029957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.944 qpair failed and we were unable to recover it. 00:25:19.944 [2024-07-15 23:28:35.030139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.944 [2024-07-15 23:28:35.030167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.944 qpair failed and we were unable to recover it. 00:25:19.944 [2024-07-15 23:28:35.030371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.944 [2024-07-15 23:28:35.030394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.944 qpair failed and we were unable to recover it. 00:25:19.944 [2024-07-15 23:28:35.030556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.944 [2024-07-15 23:28:35.030598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.944 qpair failed and we were unable to recover it. 00:25:19.944 [2024-07-15 23:28:35.030804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.944 [2024-07-15 23:28:35.030832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.944 qpair failed and we were unable to recover it. 00:25:19.944 [2024-07-15 23:28:35.030984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.944 [2024-07-15 23:28:35.031008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.944 qpair failed and we were unable to recover it. 00:25:19.944 [2024-07-15 23:28:35.031155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.944 [2024-07-15 23:28:35.031197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.944 qpair failed and we were unable to recover it. 
00:25:19.944 [2024-07-15 23:28:35.031369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.944 [2024-07-15 23:28:35.031398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.944 qpair failed and we were unable to recover it. 00:25:19.944 [2024-07-15 23:28:35.031558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.944 [2024-07-15 23:28:35.031581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.944 qpair failed and we were unable to recover it. 00:25:19.944 [2024-07-15 23:28:35.031747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.944 [2024-07-15 23:28:35.031772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.944 qpair failed and we were unable to recover it. 00:25:19.944 [2024-07-15 23:28:35.032013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.944 [2024-07-15 23:28:35.032041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.944 qpair failed and we were unable to recover it. 00:25:19.944 [2024-07-15 23:28:35.032162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.944 [2024-07-15 23:28:35.032185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.944 qpair failed and we were unable to recover it. 00:25:19.944 [2024-07-15 23:28:35.032322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.944 [2024-07-15 23:28:35.032346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.944 qpair failed and we were unable to recover it. 00:25:19.944 [2024-07-15 23:28:35.032564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.944 [2024-07-15 23:28:35.032593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.944 qpair failed and we were unable to recover it. 00:25:19.944 [2024-07-15 23:28:35.032745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.944 [2024-07-15 23:28:35.032783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.944 qpair failed and we were unable to recover it. 00:25:19.944 [2024-07-15 23:28:35.032898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.944 [2024-07-15 23:28:35.032939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.944 qpair failed and we were unable to recover it. 00:25:19.944 [2024-07-15 23:28:35.033110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.944 [2024-07-15 23:28:35.033138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.944 qpair failed and we were unable to recover it. 
00:25:19.944 [2024-07-15 23:28:35.033324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.944 [2024-07-15 23:28:35.033347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.944 qpair failed and we were unable to recover it. 00:25:19.944 [2024-07-15 23:28:35.033563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.944 [2024-07-15 23:28:35.033592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.944 qpair failed and we were unable to recover it. 00:25:19.944 [2024-07-15 23:28:35.033703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.944 [2024-07-15 23:28:35.033731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.944 qpair failed and we were unable to recover it. 00:25:19.944 [2024-07-15 23:28:35.033864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.944 [2024-07-15 23:28:35.033889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.944 qpair failed and we were unable to recover it. 00:25:19.944 [2024-07-15 23:28:35.034063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.944 [2024-07-15 23:28:35.034105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.944 qpair failed and we were unable to recover it. 00:25:19.944 [2024-07-15 23:28:35.034277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.944 [2024-07-15 23:28:35.034305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.944 qpair failed and we were unable to recover it. 00:25:19.944 [2024-07-15 23:28:35.034417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.944 [2024-07-15 23:28:35.034441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.944 qpair failed and we were unable to recover it. 00:25:19.944 [2024-07-15 23:28:35.034594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.944 [2024-07-15 23:28:35.034618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.944 qpair failed and we were unable to recover it. 00:25:19.944 [2024-07-15 23:28:35.034826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.944 [2024-07-15 23:28:35.034851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.944 qpair failed and we were unable to recover it. 00:25:19.944 [2024-07-15 23:28:35.034992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.944 [2024-07-15 23:28:35.035016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.944 qpair failed and we were unable to recover it. 
00:25:19.944 [2024-07-15 23:28:35.035197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.944 [2024-07-15 23:28:35.035225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.944 qpair failed and we were unable to recover it. 00:25:19.944 [2024-07-15 23:28:35.035367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.944 [2024-07-15 23:28:35.035399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.944 qpair failed and we were unable to recover it. 00:25:19.944 [2024-07-15 23:28:35.035613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.944 [2024-07-15 23:28:35.035636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.944 qpair failed and we were unable to recover it. 00:25:19.944 [2024-07-15 23:28:35.035768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.944 [2024-07-15 23:28:35.035797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.945 qpair failed and we were unable to recover it. 00:25:19.945 [2024-07-15 23:28:35.036017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.945 [2024-07-15 23:28:35.036045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.945 qpair failed and we were unable to recover it. 00:25:19.945 [2024-07-15 23:28:35.036215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.945 [2024-07-15 23:28:35.036238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.945 qpair failed and we were unable to recover it. 00:25:19.945 [2024-07-15 23:28:35.036424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.945 [2024-07-15 23:28:35.036452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.945 qpair failed and we were unable to recover it. 00:25:19.945 [2024-07-15 23:28:35.036592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.945 [2024-07-15 23:28:35.036620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.945 qpair failed and we were unable to recover it. 00:25:19.945 [2024-07-15 23:28:35.036783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.945 [2024-07-15 23:28:35.036808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.945 qpair failed and we were unable to recover it. 00:25:19.945 [2024-07-15 23:28:35.036930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.945 [2024-07-15 23:28:35.036970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.945 qpair failed and we were unable to recover it. 
00:25:19.945 [2024-07-15 23:28:35.037233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.945 [2024-07-15 23:28:35.037261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.945 qpair failed and we were unable to recover it. 00:25:19.945 [2024-07-15 23:28:35.037419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.945 [2024-07-15 23:28:35.037442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.945 qpair failed and we were unable to recover it. 00:25:19.945 [2024-07-15 23:28:35.037565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.945 [2024-07-15 23:28:35.037589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.945 qpair failed and we were unable to recover it. 00:25:19.945 [2024-07-15 23:28:35.037787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.945 [2024-07-15 23:28:35.037816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.945 qpair failed and we were unable to recover it. 00:25:19.945 [2024-07-15 23:28:35.037971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.945 [2024-07-15 23:28:35.037996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.945 qpair failed and we were unable to recover it. 00:25:19.945 [2024-07-15 23:28:35.038216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.945 [2024-07-15 23:28:35.038254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.945 qpair failed and we were unable to recover it. 00:25:19.945 [2024-07-15 23:28:35.038398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.945 [2024-07-15 23:28:35.038426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.945 qpair failed and we were unable to recover it. 00:25:19.945 [2024-07-15 23:28:35.038551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.945 [2024-07-15 23:28:35.038575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.945 qpair failed and we were unable to recover it. 00:25:19.945 [2024-07-15 23:28:35.038742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.945 [2024-07-15 23:28:35.038767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.945 qpair failed and we were unable to recover it. 00:25:19.945 [2024-07-15 23:28:35.038947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.945 [2024-07-15 23:28:35.038975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.945 qpair failed and we were unable to recover it. 
00:25:19.945 [2024-07-15 23:28:35.039088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.945 [2024-07-15 23:28:35.039113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.945 qpair failed and we were unable to recover it. 00:25:19.945 [2024-07-15 23:28:35.039291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.945 [2024-07-15 23:28:35.039338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.945 qpair failed and we were unable to recover it. 00:25:19.945 [2024-07-15 23:28:35.039491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.945 [2024-07-15 23:28:35.039520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.945 qpair failed and we were unable to recover it. 00:25:19.945 [2024-07-15 23:28:35.039707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.945 [2024-07-15 23:28:35.039735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.945 qpair failed and we were unable to recover it. 00:25:19.945 [2024-07-15 23:28:35.039898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.945 [2024-07-15 23:28:35.039922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.945 qpair failed and we were unable to recover it. 00:25:19.945 [2024-07-15 23:28:35.040086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.945 [2024-07-15 23:28:35.040114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.945 qpair failed and we were unable to recover it. 00:25:19.945 [2024-07-15 23:28:35.040289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.945 [2024-07-15 23:28:35.040312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.945 qpair failed and we were unable to recover it. 00:25:19.945 [2024-07-15 23:28:35.040467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.945 [2024-07-15 23:28:35.040495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.945 qpair failed and we were unable to recover it. 00:25:19.945 [2024-07-15 23:28:35.040642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.945 [2024-07-15 23:28:35.040671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.945 qpair failed and we were unable to recover it. 00:25:19.945 [2024-07-15 23:28:35.040779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.945 [2024-07-15 23:28:35.040803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.945 qpair failed and we were unable to recover it. 
00:25:19.945 [2024-07-15 23:28:35.040959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.945 [2024-07-15 23:28:35.040982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.945 qpair failed and we were unable to recover it. 00:25:19.945 [2024-07-15 23:28:35.041122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.945 [2024-07-15 23:28:35.041151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.945 qpair failed and we were unable to recover it. 00:25:19.945 [2024-07-15 23:28:35.041288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.945 [2024-07-15 23:28:35.041311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.945 qpair failed and we were unable to recover it. 00:25:19.945 [2024-07-15 23:28:35.041454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.945 [2024-07-15 23:28:35.041478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.945 qpair failed and we were unable to recover it. 00:25:19.945 [2024-07-15 23:28:35.041638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.945 [2024-07-15 23:28:35.041666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.945 qpair failed and we were unable to recover it. 00:25:19.945 [2024-07-15 23:28:35.041831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.945 [2024-07-15 23:28:35.041856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.945 qpair failed and we were unable to recover it. 00:25:19.945 [2024-07-15 23:28:35.041975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.945 [2024-07-15 23:28:35.041999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.945 qpair failed and we were unable to recover it. 00:25:19.945 [2024-07-15 23:28:35.042158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.945 [2024-07-15 23:28:35.042186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.945 qpair failed and we were unable to recover it. 00:25:19.945 [2024-07-15 23:28:35.042363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.946 [2024-07-15 23:28:35.042386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.946 qpair failed and we were unable to recover it. 00:25:19.946 [2024-07-15 23:28:35.042540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.946 [2024-07-15 23:28:35.042569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.946 qpair failed and we were unable to recover it. 
00:25:19.946 [2024-07-15 23:28:35.042711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.946 [2024-07-15 23:28:35.042756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.946 qpair failed and we were unable to recover it. 00:25:19.946 [2024-07-15 23:28:35.042896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.946 [2024-07-15 23:28:35.042940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.946 qpair failed and we were unable to recover it. 00:25:19.946 [2024-07-15 23:28:35.043118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.946 [2024-07-15 23:28:35.043147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.946 qpair failed and we were unable to recover it. 00:25:19.946 [2024-07-15 23:28:35.043293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.946 [2024-07-15 23:28:35.043321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.946 qpair failed and we were unable to recover it. 00:25:19.946 [2024-07-15 23:28:35.043545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.946 [2024-07-15 23:28:35.043568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.946 qpair failed and we were unable to recover it. 00:25:19.946 [2024-07-15 23:28:35.043716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.946 [2024-07-15 23:28:35.043751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.946 qpair failed and we were unable to recover it. 00:25:19.946 [2024-07-15 23:28:35.043880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.946 [2024-07-15 23:28:35.043905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.946 qpair failed and we were unable to recover it. 00:25:19.946 [2024-07-15 23:28:35.044029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.946 [2024-07-15 23:28:35.044053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.946 qpair failed and we were unable to recover it. 00:25:19.946 [2024-07-15 23:28:35.044228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.946 [2024-07-15 23:28:35.044256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.946 qpair failed and we were unable to recover it. 00:25:19.946 [2024-07-15 23:28:35.044391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.946 [2024-07-15 23:28:35.044419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.946 qpair failed and we were unable to recover it. 
00:25:19.946 [2024-07-15 23:28:35.044529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.946 [2024-07-15 23:28:35.044553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.946 qpair failed and we were unable to recover it. 00:25:19.946 [2024-07-15 23:28:35.044716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.946 [2024-07-15 23:28:35.044744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.946 qpair failed and we were unable to recover it. 00:25:19.946 [2024-07-15 23:28:35.045269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.946 [2024-07-15 23:28:35.045300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.946 qpair failed and we were unable to recover it. 00:25:19.946 [2024-07-15 23:28:35.045481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.946 [2024-07-15 23:28:35.045505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.946 qpair failed and we were unable to recover it. 00:25:19.946 [2024-07-15 23:28:35.045677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.946 [2024-07-15 23:28:35.045702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.946 qpair failed and we were unable to recover it. 00:25:19.946 [2024-07-15 23:28:35.045860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.946 [2024-07-15 23:28:35.045889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.946 qpair failed and we were unable to recover it. 00:25:19.946 [2024-07-15 23:28:35.046018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.946 [2024-07-15 23:28:35.046050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.946 qpair failed and we were unable to recover it. 00:25:19.946 [2024-07-15 23:28:35.046174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.946 [2024-07-15 23:28:35.046198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.946 qpair failed and we were unable to recover it. 00:25:19.946 [2024-07-15 23:28:35.046387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.946 [2024-07-15 23:28:35.046415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.946 qpair failed and we were unable to recover it. 00:25:19.946 [2024-07-15 23:28:35.046599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.946 [2024-07-15 23:28:35.046623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.946 qpair failed and we were unable to recover it. 
00:25:19.946 [2024-07-15 23:28:35.046760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.946 [2024-07-15 23:28:35.046788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.946 qpair failed and we were unable to recover it. 00:25:19.946 [2024-07-15 23:28:35.046934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.946 [2024-07-15 23:28:35.046963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.946 qpair failed and we were unable to recover it. 00:25:19.946 [2024-07-15 23:28:35.047143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.946 [2024-07-15 23:28:35.047167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.946 qpair failed and we were unable to recover it. 00:25:19.946 [2024-07-15 23:28:35.047350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.946 [2024-07-15 23:28:35.047378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.946 qpair failed and we were unable to recover it. 00:25:19.946 [2024-07-15 23:28:35.047521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.946 [2024-07-15 23:28:35.047549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.946 qpair failed and we were unable to recover it. 00:25:19.946 [2024-07-15 23:28:35.047726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.946 [2024-07-15 23:28:35.047756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.946 qpair failed and we were unable to recover it. 00:25:19.946 [2024-07-15 23:28:35.047904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.946 [2024-07-15 23:28:35.047932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.946 qpair failed and we were unable to recover it. 00:25:19.946 [2024-07-15 23:28:35.048057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.946 [2024-07-15 23:28:35.048085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.946 qpair failed and we were unable to recover it. 00:25:19.946 [2024-07-15 23:28:35.048317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.946 [2024-07-15 23:28:35.048341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.946 qpair failed and we were unable to recover it. 00:25:19.946 [2024-07-15 23:28:35.048533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.946 [2024-07-15 23:28:35.048562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.946 qpair failed and we were unable to recover it. 
00:25:19.946 [2024-07-15 23:28:35.048733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.946 [2024-07-15 23:28:35.048769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.946 qpair failed and we were unable to recover it. 00:25:19.946 [2024-07-15 23:28:35.048925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.946 [2024-07-15 23:28:35.048950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.946 qpair failed and we were unable to recover it. 00:25:19.946 [2024-07-15 23:28:35.049092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.946 [2024-07-15 23:28:35.049134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.946 qpair failed and we were unable to recover it. 00:25:19.946 [2024-07-15 23:28:35.049310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.946 [2024-07-15 23:28:35.049338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.946 qpair failed and we were unable to recover it. 00:25:19.946 [2024-07-15 23:28:35.049559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.946 [2024-07-15 23:28:35.049583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.946 qpair failed and we were unable to recover it. 00:25:19.946 [2024-07-15 23:28:35.049713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.946 [2024-07-15 23:28:35.049751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.946 qpair failed and we were unable to recover it. 00:25:19.946 [2024-07-15 23:28:35.049886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.946 [2024-07-15 23:28:35.049912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.946 qpair failed and we were unable to recover it. 00:25:19.946 [2024-07-15 23:28:35.050083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.946 [2024-07-15 23:28:35.050122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.946 qpair failed and we were unable to recover it. 00:25:19.947 [2024-07-15 23:28:35.050338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.947 [2024-07-15 23:28:35.050367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.947 qpair failed and we were unable to recover it. 00:25:19.947 [2024-07-15 23:28:35.050521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.947 [2024-07-15 23:28:35.050549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.947 qpair failed and we were unable to recover it. 
00:25:19.947 [2024-07-15 23:28:35.050700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.947 [2024-07-15 23:28:35.050754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.947 qpair failed and we were unable to recover it. 00:25:19.947 [2024-07-15 23:28:35.050914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.947 [2024-07-15 23:28:35.050946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.947 qpair failed and we were unable to recover it. 00:25:19.947 [2024-07-15 23:28:35.051092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.947 [2024-07-15 23:28:35.051120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.947 qpair failed and we were unable to recover it. 00:25:19.947 [2024-07-15 23:28:35.051264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.947 [2024-07-15 23:28:35.051302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.947 qpair failed and we were unable to recover it. 00:25:19.947 [2024-07-15 23:28:35.051447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.947 [2024-07-15 23:28:35.051488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.947 qpair failed and we were unable to recover it. 00:25:19.947 [2024-07-15 23:28:35.051645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.947 [2024-07-15 23:28:35.051673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.947 qpair failed and we were unable to recover it. 00:25:19.947 [2024-07-15 23:28:35.051802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.947 [2024-07-15 23:28:35.051828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.947 qpair failed and we were unable to recover it. 00:25:19.947 [2024-07-15 23:28:35.051968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.947 [2024-07-15 23:28:35.051993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.947 qpair failed and we were unable to recover it. 00:25:19.947 [2024-07-15 23:28:35.052232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.947 [2024-07-15 23:28:35.052259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.947 qpair failed and we were unable to recover it. 00:25:19.947 [2024-07-15 23:28:35.052426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.947 [2024-07-15 23:28:35.052449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.947 qpair failed and we were unable to recover it. 
00:25:19.947 [2024-07-15 23:28:35.052631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.947 [2024-07-15 23:28:35.052659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.947 qpair failed and we were unable to recover it. 00:25:19.947 [2024-07-15 23:28:35.052786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.947 [2024-07-15 23:28:35.052814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.947 qpair failed and we were unable to recover it. 00:25:19.947 [2024-07-15 23:28:35.052960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.947 [2024-07-15 23:28:35.052985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.947 qpair failed and we were unable to recover it. 00:25:19.947 [2024-07-15 23:28:35.053215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.947 [2024-07-15 23:28:35.053245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.947 qpair failed and we were unable to recover it. 00:25:19.947 [2024-07-15 23:28:35.053410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.947 [2024-07-15 23:28:35.053438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.947 qpair failed and we were unable to recover it. 00:25:19.947 [2024-07-15 23:28:35.053665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.947 [2024-07-15 23:28:35.053688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.947 qpair failed and we were unable to recover it. 00:25:19.947 [2024-07-15 23:28:35.053856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.947 [2024-07-15 23:28:35.053885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.947 qpair failed and we were unable to recover it. 00:25:19.947 [2024-07-15 23:28:35.054038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.947 [2024-07-15 23:28:35.054066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.947 qpair failed and we were unable to recover it. 00:25:19.947 [2024-07-15 23:28:35.054204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.947 [2024-07-15 23:28:35.054241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.947 qpair failed and we were unable to recover it. 00:25:19.947 [2024-07-15 23:28:35.054439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.947 [2024-07-15 23:28:35.054467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.947 qpair failed and we were unable to recover it. 
00:25:19.947 [2024-07-15 23:28:35.054576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.947 [2024-07-15 23:28:35.054604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.947 qpair failed and we were unable to recover it. 00:25:19.947 [2024-07-15 23:28:35.054780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.947 [2024-07-15 23:28:35.054806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.947 qpair failed and we were unable to recover it. 00:25:19.947 [2024-07-15 23:28:35.054972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.947 [2024-07-15 23:28:35.054997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.947 qpair failed and we were unable to recover it. 00:25:19.947 [2024-07-15 23:28:35.055234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.947 [2024-07-15 23:28:35.055261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.947 qpair failed and we were unable to recover it. 00:25:19.947 [2024-07-15 23:28:35.055428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.947 [2024-07-15 23:28:35.055450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.947 qpair failed and we were unable to recover it. 00:25:19.947 [2024-07-15 23:28:35.055610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.947 [2024-07-15 23:28:35.055633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.947 qpair failed and we were unable to recover it. 00:25:19.947 [2024-07-15 23:28:35.055797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.947 [2024-07-15 23:28:35.055824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.947 qpair failed and we were unable to recover it. 00:25:19.947 [2024-07-15 23:28:35.055961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.947 [2024-07-15 23:28:35.055986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.947 qpair failed and we were unable to recover it. 00:25:19.947 [2024-07-15 23:28:35.056216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.947 [2024-07-15 23:28:35.056244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.947 qpair failed and we were unable to recover it. 00:25:19.947 [2024-07-15 23:28:35.056416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.947 [2024-07-15 23:28:35.056444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.947 qpair failed and we were unable to recover it. 
00:25:19.947 [2024-07-15 23:28:35.056628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.947 [2024-07-15 23:28:35.056650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.947 qpair failed and we were unable to recover it. 00:25:19.947 [2024-07-15 23:28:35.056837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.947 [2024-07-15 23:28:35.056865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.947 qpair failed and we were unable to recover it. 00:25:19.947 [2024-07-15 23:28:35.056972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.947 [2024-07-15 23:28:35.057000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.947 qpair failed and we were unable to recover it. 00:25:19.947 [2024-07-15 23:28:35.057136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.947 [2024-07-15 23:28:35.057160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.947 qpair failed and we were unable to recover it. 00:25:19.947 [2024-07-15 23:28:35.057380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.947 [2024-07-15 23:28:35.057408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.947 qpair failed and we were unable to recover it. 00:25:19.947 [2024-07-15 23:28:35.057520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.947 [2024-07-15 23:28:35.057547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.947 qpair failed and we were unable to recover it. 00:25:19.947 [2024-07-15 23:28:35.057678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.947 [2024-07-15 23:28:35.057702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.947 qpair failed and we were unable to recover it. 00:25:19.947 [2024-07-15 23:28:35.057867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.947 [2024-07-15 23:28:35.057910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.947 qpair failed and we were unable to recover it. 00:25:19.947 [2024-07-15 23:28:35.058054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.947 [2024-07-15 23:28:35.058081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.948 qpair failed and we were unable to recover it. 00:25:19.948 [2024-07-15 23:28:35.058277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.948 [2024-07-15 23:28:35.058300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.948 qpair failed and we were unable to recover it. 
00:25:19.948 [2024-07-15 23:28:35.058495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.948 [2024-07-15 23:28:35.058523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.948 qpair failed and we were unable to recover it. 00:25:19.948 [2024-07-15 23:28:35.058699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.948 [2024-07-15 23:28:35.058731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.948 qpair failed and we were unable to recover it. 00:25:19.948 [2024-07-15 23:28:35.058852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.948 [2024-07-15 23:28:35.058878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.948 qpair failed and we were unable to recover it. 00:25:19.948 [2024-07-15 23:28:35.059013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.948 [2024-07-15 23:28:35.059053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.948 qpair failed and we were unable to recover it. 00:25:19.948 [2024-07-15 23:28:35.059220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.948 [2024-07-15 23:28:35.059248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.948 qpair failed and we were unable to recover it. 00:25:19.948 [2024-07-15 23:28:35.059427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.948 [2024-07-15 23:28:35.059449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.948 qpair failed and we were unable to recover it. 00:25:19.948 [2024-07-15 23:28:35.059672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.948 [2024-07-15 23:28:35.059699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.948 qpair failed and we were unable to recover it. 00:25:19.948 [2024-07-15 23:28:35.059870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.948 [2024-07-15 23:28:35.059898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.948 qpair failed and we were unable to recover it. 00:25:19.948 [2024-07-15 23:28:35.060020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.948 [2024-07-15 23:28:35.060062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.948 qpair failed and we were unable to recover it. 00:25:19.948 [2024-07-15 23:28:35.060253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.948 [2024-07-15 23:28:35.060281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.948 qpair failed and we were unable to recover it. 
00:25:19.948 [2024-07-15 23:28:35.060409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.948 [2024-07-15 23:28:35.060437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.948 qpair failed and we were unable to recover it. 00:25:19.948 [2024-07-15 23:28:35.060596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.948 [2024-07-15 23:28:35.060624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.948 qpair failed and we were unable to recover it. 00:25:19.948 [2024-07-15 23:28:35.060801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.948 [2024-07-15 23:28:35.060827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.948 qpair failed and we were unable to recover it. 00:25:19.948 [2024-07-15 23:28:35.060966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.948 [2024-07-15 23:28:35.060991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.948 qpair failed and we were unable to recover it. 00:25:19.948 [2024-07-15 23:28:35.061111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.948 [2024-07-15 23:28:35.061155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.948 qpair failed and we were unable to recover it. 00:25:19.948 [2024-07-15 23:28:35.061279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.948 [2024-07-15 23:28:35.061303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.948 qpair failed and we were unable to recover it. 00:25:19.948 [2024-07-15 23:28:35.061499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.948 [2024-07-15 23:28:35.061527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.948 qpair failed and we were unable to recover it. 00:25:19.948 [2024-07-15 23:28:35.061689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.948 [2024-07-15 23:28:35.061712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.948 qpair failed and we were unable to recover it. 00:25:19.948 [2024-07-15 23:28:35.061883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.948 [2024-07-15 23:28:35.061911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.948 qpair failed and we were unable to recover it. 00:25:19.948 [2024-07-15 23:28:35.062048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.948 [2024-07-15 23:28:35.062077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.948 qpair failed and we were unable to recover it. 
00:25:19.948 [2024-07-15 23:28:35.062280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.948 [2024-07-15 23:28:35.062303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.948 qpair failed and we were unable to recover it. 00:25:19.948 [2024-07-15 23:28:35.062428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.948 [2024-07-15 23:28:35.062456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.948 qpair failed and we were unable to recover it. 00:25:19.948 [2024-07-15 23:28:35.062630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.948 [2024-07-15 23:28:35.062658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.948 qpair failed and we were unable to recover it. 00:25:19.948 [2024-07-15 23:28:35.062832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.948 [2024-07-15 23:28:35.062857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.948 qpair failed and we were unable to recover it. 00:25:19.948 [2024-07-15 23:28:35.062964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.948 [2024-07-15 23:28:35.062990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.948 qpair failed and we were unable to recover it. 00:25:19.948 [2024-07-15 23:28:35.063177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.948 [2024-07-15 23:28:35.063205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.948 qpair failed and we were unable to recover it. 00:25:19.948 [2024-07-15 23:28:35.063422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.948 [2024-07-15 23:28:35.063456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.948 qpair failed and we were unable to recover it. 00:25:19.948 [2024-07-15 23:28:35.063630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.948 [2024-07-15 23:28:35.063658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.948 qpair failed and we were unable to recover it. 00:25:19.948 [2024-07-15 23:28:35.063856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.948 [2024-07-15 23:28:35.063882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.948 qpair failed and we were unable to recover it. 00:25:19.948 [2024-07-15 23:28:35.063996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.948 [2024-07-15 23:28:35.064035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.948 qpair failed and we were unable to recover it. 
00:25:19.948 [2024-07-15 23:28:35.064182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.948 [2024-07-15 23:28:35.064223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.948 qpair failed and we were unable to recover it. 00:25:19.948 [2024-07-15 23:28:35.064351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.948 [2024-07-15 23:28:35.064379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.948 qpair failed and we were unable to recover it. 00:25:19.948 [2024-07-15 23:28:35.064504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.948 [2024-07-15 23:28:35.064528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.948 qpair failed and we were unable to recover it. 00:25:19.948 [2024-07-15 23:28:35.064745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.948 [2024-07-15 23:28:35.064774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.948 qpair failed and we were unable to recover it. 00:25:19.948 [2024-07-15 23:28:35.064901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.948 [2024-07-15 23:28:35.064930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.948 qpair failed and we were unable to recover it. 00:25:19.948 [2024-07-15 23:28:35.065056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.948 [2024-07-15 23:28:35.065094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.948 qpair failed and we were unable to recover it. 00:25:19.948 [2024-07-15 23:28:35.065254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.948 [2024-07-15 23:28:35.065296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.948 qpair failed and we were unable to recover it. 00:25:19.948 [2024-07-15 23:28:35.065465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.948 [2024-07-15 23:28:35.065493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.948 qpair failed and we were unable to recover it. 00:25:19.948 [2024-07-15 23:28:35.065622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.948 [2024-07-15 23:28:35.065660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.948 qpair failed and we were unable to recover it. 00:25:19.948 [2024-07-15 23:28:35.065794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.949 [2024-07-15 23:28:35.065820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.949 qpair failed and we were unable to recover it. 
00:25:19.949 [2024-07-15 23:28:35.065974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.949 [2024-07-15 23:28:35.066003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.949 qpair failed and we were unable to recover it. 00:25:19.949 [2024-07-15 23:28:35.066161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.949 [2024-07-15 23:28:35.066187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.949 qpair failed and we were unable to recover it. 00:25:19.949 [2024-07-15 23:28:35.066359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.949 [2024-07-15 23:28:35.066387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.949 qpair failed and we were unable to recover it. 00:25:19.949 [2024-07-15 23:28:35.066492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.949 [2024-07-15 23:28:35.066521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.949 qpair failed and we were unable to recover it. 00:25:19.949 [2024-07-15 23:28:35.066758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.949 [2024-07-15 23:28:35.066784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.949 qpair failed and we were unable to recover it. 00:25:19.949 [2024-07-15 23:28:35.066898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.949 [2024-07-15 23:28:35.066926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.949 qpair failed and we were unable to recover it. 00:25:19.949 [2024-07-15 23:28:35.067092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.949 [2024-07-15 23:28:35.067120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.949 qpair failed and we were unable to recover it. 00:25:19.949 [2024-07-15 23:28:35.067314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.949 [2024-07-15 23:28:35.067337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.949 qpair failed and we were unable to recover it. 00:25:19.949 [2024-07-15 23:28:35.067516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.949 [2024-07-15 23:28:35.067545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.949 qpair failed and we were unable to recover it. 00:25:19.949 [2024-07-15 23:28:35.067753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.949 [2024-07-15 23:28:35.067782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.949 qpair failed and we were unable to recover it. 
00:25:19.949 [2024-07-15 23:28:35.067939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.949 [2024-07-15 23:28:35.067964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.949 qpair failed and we were unable to recover it. 00:25:19.949 [2024-07-15 23:28:35.068114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.949 [2024-07-15 23:28:35.068157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.949 qpair failed and we were unable to recover it. 00:25:19.949 [2024-07-15 23:28:35.068373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.949 [2024-07-15 23:28:35.068401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.949 qpair failed and we were unable to recover it. 00:25:19.949 [2024-07-15 23:28:35.068560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.949 [2024-07-15 23:28:35.068588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.949 qpair failed and we were unable to recover it. 00:25:19.949 [2024-07-15 23:28:35.068780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.949 [2024-07-15 23:28:35.068806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.949 qpair failed and we were unable to recover it. 00:25:19.949 [2024-07-15 23:28:35.068943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.949 [2024-07-15 23:28:35.068968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.949 qpair failed and we were unable to recover it. 00:25:19.949 [2024-07-15 23:28:35.069167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.949 [2024-07-15 23:28:35.069190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.949 qpair failed and we were unable to recover it. 00:25:19.949 [2024-07-15 23:28:35.069366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.949 [2024-07-15 23:28:35.069394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.949 qpair failed and we were unable to recover it. 00:25:19.949 [2024-07-15 23:28:35.069570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.949 [2024-07-15 23:28:35.069598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.949 qpair failed and we were unable to recover it. 00:25:19.949 [2024-07-15 23:28:35.069794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.949 [2024-07-15 23:28:35.069820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.949 qpair failed and we were unable to recover it. 
00:25:19.949 [2024-07-15 23:28:35.069957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.949 [2024-07-15 23:28:35.069985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.949 qpair failed and we were unable to recover it. 00:25:19.949 [2024-07-15 23:28:35.070195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.949 [2024-07-15 23:28:35.070224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.949 qpair failed and we were unable to recover it. 00:25:19.949 [2024-07-15 23:28:35.070382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.949 [2024-07-15 23:28:35.070405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.949 qpair failed and we were unable to recover it. 00:25:19.949 [2024-07-15 23:28:35.070579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.949 [2024-07-15 23:28:35.070607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.949 qpair failed and we were unable to recover it. 00:25:19.949 [2024-07-15 23:28:35.070767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.949 [2024-07-15 23:28:35.070796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.949 qpair failed and we were unable to recover it. 00:25:19.949 [2024-07-15 23:28:35.070911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.949 [2024-07-15 23:28:35.070937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.949 qpair failed and we were unable to recover it. 00:25:19.949 [2024-07-15 23:28:35.071059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.949 [2024-07-15 23:28:35.071097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.949 qpair failed and we were unable to recover it. 00:25:19.949 [2024-07-15 23:28:35.071236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.949 [2024-07-15 23:28:35.071264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.949 qpair failed and we were unable to recover it. 00:25:19.949 [2024-07-15 23:28:35.071392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.949 [2024-07-15 23:28:35.071416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.949 qpair failed and we were unable to recover it. 00:25:19.949 [2024-07-15 23:28:35.071659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.949 [2024-07-15 23:28:35.071686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.949 qpair failed and we were unable to recover it. 
00:25:19.949 [2024-07-15 23:28:35.071846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.949 [2024-07-15 23:28:35.071874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.949 qpair failed and we were unable to recover it. 00:25:19.949 [2024-07-15 23:28:35.072045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.949 [2024-07-15 23:28:35.072067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.949 qpair failed and we were unable to recover it. 00:25:19.949 [2024-07-15 23:28:35.072272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.949 [2024-07-15 23:28:35.072300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.949 qpair failed and we were unable to recover it. 00:25:19.949 [2024-07-15 23:28:35.072439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.949 [2024-07-15 23:28:35.072467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.949 qpair failed and we were unable to recover it. 00:25:19.950 [2024-07-15 23:28:35.072652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.950 [2024-07-15 23:28:35.072674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.950 qpair failed and we were unable to recover it. 00:25:19.950 [2024-07-15 23:28:35.072815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.950 [2024-07-15 23:28:35.072857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.950 qpair failed and we were unable to recover it. 00:25:19.950 [2024-07-15 23:28:35.073030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.950 [2024-07-15 23:28:35.073058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.950 qpair failed and we were unable to recover it. 00:25:19.950 [2024-07-15 23:28:35.073252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.950 [2024-07-15 23:28:35.073274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.950 qpair failed and we were unable to recover it. 00:25:19.950 [2024-07-15 23:28:35.073428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.950 [2024-07-15 23:28:35.073456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.950 qpair failed and we were unable to recover it. 00:25:19.950 [2024-07-15 23:28:35.073609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.950 [2024-07-15 23:28:35.073637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.950 qpair failed and we were unable to recover it. 
00:25:19.950 [2024-07-15 23:28:35.073811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.950 [2024-07-15 23:28:35.073836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.950 qpair failed and we were unable to recover it. 00:25:19.950 [2024-07-15 23:28:35.073948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.950 [2024-07-15 23:28:35.073976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.950 qpair failed and we were unable to recover it. 00:25:19.950 [2024-07-15 23:28:35.074170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.950 [2024-07-15 23:28:35.074198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.950 qpair failed and we were unable to recover it. 00:25:19.950 [2024-07-15 23:28:35.074354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.950 [2024-07-15 23:28:35.074377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.950 qpair failed and we were unable to recover it. 00:25:19.950 [2024-07-15 23:28:35.074594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.950 [2024-07-15 23:28:35.074622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.950 qpair failed and we were unable to recover it. 00:25:19.950 [2024-07-15 23:28:35.074775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.950 [2024-07-15 23:28:35.074816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.950 qpair failed and we were unable to recover it. 00:25:19.950 [2024-07-15 23:28:35.074952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.950 [2024-07-15 23:28:35.074977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.950 qpair failed and we were unable to recover it. 00:25:19.950 [2024-07-15 23:28:35.075147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.950 [2024-07-15 23:28:35.075170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.950 qpair failed and we were unable to recover it. 00:25:19.950 [2024-07-15 23:28:35.075357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.950 [2024-07-15 23:28:35.075385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.950 qpair failed and we were unable to recover it. 00:25:19.950 [2024-07-15 23:28:35.075584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.950 [2024-07-15 23:28:35.075607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.950 qpair failed and we were unable to recover it. 
00:25:19.950 [2024-07-15 23:28:35.075795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.950 [2024-07-15 23:28:35.075821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.950 qpair failed and we were unable to recover it. 00:25:19.950 [2024-07-15 23:28:35.075981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.950 [2024-07-15 23:28:35.076009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.950 qpair failed and we were unable to recover it. 00:25:19.950 [2024-07-15 23:28:35.076217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.950 [2024-07-15 23:28:35.076240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.950 qpair failed and we were unable to recover it. 00:25:19.950 [2024-07-15 23:28:35.076405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.950 [2024-07-15 23:28:35.076434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.950 qpair failed and we were unable to recover it. 00:25:19.950 [2024-07-15 23:28:35.076589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.950 [2024-07-15 23:28:35.076617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.950 qpair failed and we were unable to recover it. 00:25:19.950 [2024-07-15 23:28:35.076787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.950 [2024-07-15 23:28:35.076813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.950 qpair failed and we were unable to recover it. 00:25:19.950 [2024-07-15 23:28:35.076969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.950 [2024-07-15 23:28:35.076997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.950 qpair failed and we were unable to recover it. 00:25:19.950 [2024-07-15 23:28:35.077133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.950 [2024-07-15 23:28:35.077160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.950 qpair failed and we were unable to recover it. 00:25:19.950 [2024-07-15 23:28:35.077315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.950 [2024-07-15 23:28:35.077353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.950 qpair failed and we were unable to recover it. 00:25:19.950 [2024-07-15 23:28:35.077577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.950 [2024-07-15 23:28:35.077606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.950 qpair failed and we were unable to recover it. 
00:25:19.950 [2024-07-15 23:28:35.077754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.950 [2024-07-15 23:28:35.077782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.950 qpair failed and we were unable to recover it. 00:25:19.950 [2024-07-15 23:28:35.077909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.950 [2024-07-15 23:28:35.077935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.950 qpair failed and we were unable to recover it. 00:25:19.950 [2024-07-15 23:28:35.078056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.950 [2024-07-15 23:28:35.078081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.950 qpair failed and we were unable to recover it. 00:25:19.950 [2024-07-15 23:28:35.078248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.950 [2024-07-15 23:28:35.078275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.950 qpair failed and we were unable to recover it. 00:25:19.950 [2024-07-15 23:28:35.078525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.950 [2024-07-15 23:28:35.078548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.950 qpair failed and we were unable to recover it. 00:25:19.950 [2024-07-15 23:28:35.078706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.950 [2024-07-15 23:28:35.078734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.950 qpair failed and we were unable to recover it. 00:25:19.950 [2024-07-15 23:28:35.078891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.950 [2024-07-15 23:28:35.078919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.950 qpair failed and we were unable to recover it. 00:25:19.950 [2024-07-15 23:28:35.079096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.950 [2024-07-15 23:28:35.079120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.950 qpair failed and we were unable to recover it. 00:25:19.950 [2024-07-15 23:28:35.079320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.950 [2024-07-15 23:28:35.079348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.950 qpair failed and we were unable to recover it. 00:25:19.950 [2024-07-15 23:28:35.079525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.950 [2024-07-15 23:28:35.079557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.950 qpair failed and we were unable to recover it. 
00:25:19.950 [2024-07-15 23:28:35.079717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.950 [2024-07-15 23:28:35.079753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.950 qpair failed and we were unable to recover it. 00:25:19.950 [2024-07-15 23:28:35.079891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.950 [2024-07-15 23:28:35.079915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.950 qpair failed and we were unable to recover it. 00:25:19.950 [2024-07-15 23:28:35.080072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.950 [2024-07-15 23:28:35.080100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.950 qpair failed and we were unable to recover it. 00:25:19.950 [2024-07-15 23:28:35.080300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.950 [2024-07-15 23:28:35.080323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.950 qpair failed and we were unable to recover it. 00:25:19.950 [2024-07-15 23:28:35.080479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.951 [2024-07-15 23:28:35.080507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.951 qpair failed and we were unable to recover it. 00:25:19.951 [2024-07-15 23:28:35.080764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.951 [2024-07-15 23:28:35.080793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.951 qpair failed and we were unable to recover it. 00:25:19.951 [2024-07-15 23:28:35.080922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.951 [2024-07-15 23:28:35.080946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.951 qpair failed and we were unable to recover it. 00:25:19.951 [2024-07-15 23:28:35.081113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.951 [2024-07-15 23:28:35.081151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.951 qpair failed and we were unable to recover it. 00:25:19.951 [2024-07-15 23:28:35.081338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.951 [2024-07-15 23:28:35.081366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.951 qpair failed and we were unable to recover it. 00:25:19.951 [2024-07-15 23:28:35.081538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.951 [2024-07-15 23:28:35.081561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.951 qpair failed and we were unable to recover it. 
00:25:19.951 [2024-07-15 23:28:35.081731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.951 [2024-07-15 23:28:35.081768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.951 qpair failed and we were unable to recover it. 00:25:19.951 [2024-07-15 23:28:35.081926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.951 [2024-07-15 23:28:35.081958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.951 qpair failed and we were unable to recover it. 00:25:19.951 [2024-07-15 23:28:35.082128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.951 [2024-07-15 23:28:35.082151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.951 qpair failed and we were unable to recover it. 00:25:19.951 [2024-07-15 23:28:35.082341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.951 [2024-07-15 23:28:35.082369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.951 qpair failed and we were unable to recover it. 00:25:19.951 [2024-07-15 23:28:35.082560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.951 [2024-07-15 23:28:35.082588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.951 qpair failed and we were unable to recover it. 00:25:19.951 [2024-07-15 23:28:35.082763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.951 [2024-07-15 23:28:35.082816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.951 qpair failed and we were unable to recover it. 00:25:19.951 [2024-07-15 23:28:35.082978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.951 [2024-07-15 23:28:35.083002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.951 qpair failed and we were unable to recover it. 00:25:19.951 [2024-07-15 23:28:35.083133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.951 [2024-07-15 23:28:35.083160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.951 qpair failed and we were unable to recover it. 00:25:19.951 [2024-07-15 23:28:35.083312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.951 [2024-07-15 23:28:35.083336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.951 qpair failed and we were unable to recover it. 00:25:19.951 [2024-07-15 23:28:35.083514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.951 [2024-07-15 23:28:35.083541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.951 qpair failed and we were unable to recover it. 
00:25:19.951 [2024-07-15 23:28:35.083702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.951 [2024-07-15 23:28:35.083730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.951 qpair failed and we were unable to recover it. 00:25:19.951 [2024-07-15 23:28:35.083908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.951 [2024-07-15 23:28:35.083933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.951 qpair failed and we were unable to recover it. 00:25:19.951 [2024-07-15 23:28:35.084071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.951 [2024-07-15 23:28:35.084113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.951 qpair failed and we were unable to recover it. 00:25:19.951 [2024-07-15 23:28:35.084273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.951 [2024-07-15 23:28:35.084302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.951 qpair failed and we were unable to recover it. 00:25:19.951 [2024-07-15 23:28:35.084440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.951 [2024-07-15 23:28:35.084477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.951 qpair failed and we were unable to recover it. 00:25:19.951 [2024-07-15 23:28:35.084655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.951 [2024-07-15 23:28:35.084683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.951 qpair failed and we were unable to recover it. 00:25:19.951 [2024-07-15 23:28:35.084868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.951 [2024-07-15 23:28:35.084893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.951 qpair failed and we were unable to recover it. 00:25:19.951 [2024-07-15 23:28:35.085056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.951 [2024-07-15 23:28:35.085078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.951 qpair failed and we were unable to recover it. 00:25:19.951 [2024-07-15 23:28:35.085276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.951 [2024-07-15 23:28:35.085304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.951 qpair failed and we were unable to recover it. 00:25:19.951 [2024-07-15 23:28:35.085468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.951 [2024-07-15 23:28:35.085496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.951 qpair failed and we were unable to recover it. 
00:25:19.951 [2024-07-15 23:28:35.085687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.951 [2024-07-15 23:28:35.085735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.951 qpair failed and we were unable to recover it. 00:25:19.951 [2024-07-15 23:28:35.085933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.951 [2024-07-15 23:28:35.085961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.951 qpair failed and we were unable to recover it. 00:25:19.951 [2024-07-15 23:28:35.086102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.951 [2024-07-15 23:28:35.086130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.951 qpair failed and we were unable to recover it. 00:25:19.951 [2024-07-15 23:28:35.086283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.951 [2024-07-15 23:28:35.086305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.951 qpair failed and we were unable to recover it. 00:25:19.951 [2024-07-15 23:28:35.086447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.951 [2024-07-15 23:28:35.086488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.951 qpair failed and we were unable to recover it. 00:25:19.951 [2024-07-15 23:28:35.086624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.951 [2024-07-15 23:28:35.086652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.951 qpair failed and we were unable to recover it. 00:25:19.951 [2024-07-15 23:28:35.086789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.951 [2024-07-15 23:28:35.086828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.951 qpair failed and we were unable to recover it. 00:25:19.951 [2024-07-15 23:28:35.087000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.951 [2024-07-15 23:28:35.087029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.951 qpair failed and we were unable to recover it. 00:25:19.951 [2024-07-15 23:28:35.087174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.951 [2024-07-15 23:28:35.087202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.951 qpair failed and we were unable to recover it. 00:25:19.951 [2024-07-15 23:28:35.087375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.951 [2024-07-15 23:28:35.087398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.951 qpair failed and we were unable to recover it. 
00:25:19.951 [2024-07-15 23:28:35.087558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.951 [2024-07-15 23:28:35.087581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.951 qpair failed and we were unable to recover it. 00:25:19.951 [2024-07-15 23:28:35.087775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.951 [2024-07-15 23:28:35.087804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.951 qpair failed and we were unable to recover it. 00:25:19.951 [2024-07-15 23:28:35.087972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.951 [2024-07-15 23:28:35.087995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.951 qpair failed and we were unable to recover it. 00:25:19.951 [2024-07-15 23:28:35.088174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.951 [2024-07-15 23:28:35.088202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.951 qpair failed and we were unable to recover it. 00:25:19.951 [2024-07-15 23:28:35.088355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.951 [2024-07-15 23:28:35.088383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.952 qpair failed and we were unable to recover it. 00:25:19.952 [2024-07-15 23:28:35.088531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.952 [2024-07-15 23:28:35.088568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.952 qpair failed and we were unable to recover it. 00:25:19.952 [2024-07-15 23:28:35.088755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.952 [2024-07-15 23:28:35.088783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.952 qpair failed and we were unable to recover it. 00:25:19.952 [2024-07-15 23:28:35.088906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.952 [2024-07-15 23:28:35.088933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.952 qpair failed and we were unable to recover it. 00:25:19.952 [2024-07-15 23:28:35.089159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.952 [2024-07-15 23:28:35.089182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.952 qpair failed and we were unable to recover it. 00:25:19.952 [2024-07-15 23:28:35.089301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.952 [2024-07-15 23:28:35.089330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.952 qpair failed and we were unable to recover it. 
00:25:19.952 [2024-07-15 23:28:35.089483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.952 [2024-07-15 23:28:35.089511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.952 qpair failed and we were unable to recover it. 00:25:19.952 [2024-07-15 23:28:35.089723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.952 [2024-07-15 23:28:35.089789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.952 qpair failed and we were unable to recover it. 00:25:19.952 [2024-07-15 23:28:35.089939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.952 [2024-07-15 23:28:35.089962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.952 qpair failed and we were unable to recover it. 00:25:19.952 [2024-07-15 23:28:35.090120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.952 [2024-07-15 23:28:35.090147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.952 qpair failed and we were unable to recover it. 00:25:19.952 [2024-07-15 23:28:35.090317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.952 [2024-07-15 23:28:35.090339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.952 qpair failed and we were unable to recover it. 00:25:19.952 [2024-07-15 23:28:35.090525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.952 [2024-07-15 23:28:35.090565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.952 qpair failed and we were unable to recover it. 00:25:19.952 [2024-07-15 23:28:35.090670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.952 [2024-07-15 23:28:35.090698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.952 qpair failed and we were unable to recover it. 00:25:19.952 [2024-07-15 23:28:35.090872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.952 [2024-07-15 23:28:35.090897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.952 qpair failed and we were unable to recover it. 00:25:19.952 [2024-07-15 23:28:35.091116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.952 [2024-07-15 23:28:35.091144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.952 qpair failed and we were unable to recover it. 00:25:19.952 [2024-07-15 23:28:35.091293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.952 [2024-07-15 23:28:35.091322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.952 qpair failed and we were unable to recover it. 
00:25:19.952 [2024-07-15 23:28:35.091491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.952 [2024-07-15 23:28:35.091513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.952 qpair failed and we were unable to recover it. 00:25:19.952 [2024-07-15 23:28:35.091748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.952 [2024-07-15 23:28:35.091790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.952 qpair failed and we were unable to recover it. 00:25:19.952 [2024-07-15 23:28:35.091927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.952 [2024-07-15 23:28:35.091952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.952 qpair failed and we were unable to recover it. 00:25:19.952 [2024-07-15 23:28:35.092188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.952 [2024-07-15 23:28:35.092212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.952 qpair failed and we were unable to recover it. 00:25:19.952 [2024-07-15 23:28:35.092402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.952 [2024-07-15 23:28:35.092430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.952 qpair failed and we were unable to recover it. 00:25:19.952 [2024-07-15 23:28:35.092605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.952 [2024-07-15 23:28:35.092632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.952 qpair failed and we were unable to recover it. 00:25:19.952 [2024-07-15 23:28:35.092804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.952 [2024-07-15 23:28:35.092828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.952 qpair failed and we were unable to recover it. 00:25:19.952 [2024-07-15 23:28:35.092972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.952 [2024-07-15 23:28:35.093000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.952 qpair failed and we were unable to recover it. 00:25:19.952 [2024-07-15 23:28:35.093210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.952 [2024-07-15 23:28:35.093244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.952 qpair failed and we were unable to recover it. 00:25:19.952 [2024-07-15 23:28:35.093417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.952 [2024-07-15 23:28:35.093440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.952 qpair failed and we were unable to recover it. 
00:25:19.952 [2024-07-15 23:28:35.093599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.952 [2024-07-15 23:28:35.093626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.952 qpair failed and we were unable to recover it. 00:25:19.952 [2024-07-15 23:28:35.093818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.952 [2024-07-15 23:28:35.093846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.952 qpair failed and we were unable to recover it. 00:25:19.952 [2024-07-15 23:28:35.094015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.952 [2024-07-15 23:28:35.094038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.952 qpair failed and we were unable to recover it. 00:25:19.952 [2024-07-15 23:28:35.094227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.952 [2024-07-15 23:28:35.094261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.952 qpair failed and we were unable to recover it. 00:25:19.952 [2024-07-15 23:28:35.094400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.952 [2024-07-15 23:28:35.094428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.952 qpair failed and we were unable to recover it. 00:25:19.952 [2024-07-15 23:28:35.094565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.952 [2024-07-15 23:28:35.094588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.952 qpair failed and we were unable to recover it. 00:25:19.952 [2024-07-15 23:28:35.094757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.952 [2024-07-15 23:28:35.094799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.952 qpair failed and we were unable to recover it. 00:25:19.952 [2024-07-15 23:28:35.094978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.952 [2024-07-15 23:28:35.095006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.952 qpair failed and we were unable to recover it. 00:25:19.952 [2024-07-15 23:28:35.095166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.952 [2024-07-15 23:28:35.095189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.952 qpair failed and we were unable to recover it. 00:25:19.952 [2024-07-15 23:28:35.095445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.952 [2024-07-15 23:28:35.095472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.952 qpair failed and we were unable to recover it. 
00:25:19.952 [2024-07-15 23:28:35.095650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.952 [2024-07-15 23:28:35.095677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.952 qpair failed and we were unable to recover it. 00:25:19.952 [2024-07-15 23:28:35.095887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.952 [2024-07-15 23:28:35.095911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.952 qpair failed and we were unable to recover it. 00:25:19.952 [2024-07-15 23:28:35.096053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.952 [2024-07-15 23:28:35.096090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.952 qpair failed and we were unable to recover it. 00:25:19.952 [2024-07-15 23:28:35.096349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.952 [2024-07-15 23:28:35.096377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.952 qpair failed and we were unable to recover it. 00:25:19.952 [2024-07-15 23:28:35.096586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.952 [2024-07-15 23:28:35.096609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.952 qpair failed and we were unable to recover it. 00:25:19.952 [2024-07-15 23:28:35.096771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.953 [2024-07-15 23:28:35.096800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.953 qpair failed and we were unable to recover it. 00:25:19.953 [2024-07-15 23:28:35.096938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.953 [2024-07-15 23:28:35.096966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.953 qpair failed and we were unable to recover it. 00:25:19.953 [2024-07-15 23:28:35.097187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.953 [2024-07-15 23:28:35.097210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.953 qpair failed and we were unable to recover it. 00:25:19.953 [2024-07-15 23:28:35.097346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.953 [2024-07-15 23:28:35.097374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.953 qpair failed and we were unable to recover it. 00:25:19.953 [2024-07-15 23:28:35.097596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.953 [2024-07-15 23:28:35.097624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.953 qpair failed and we were unable to recover it. 
00:25:19.953 [2024-07-15 23:28:35.097810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.953 [2024-07-15 23:28:35.097834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.953 qpair failed and we were unable to recover it. 00:25:19.953 [2024-07-15 23:28:35.097978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.953 [2024-07-15 23:28:35.098022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.953 qpair failed and we were unable to recover it. 00:25:19.953 [2024-07-15 23:28:35.098182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.953 [2024-07-15 23:28:35.098210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.953 qpair failed and we were unable to recover it. 00:25:19.953 [2024-07-15 23:28:35.098404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.953 [2024-07-15 23:28:35.098427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.953 qpair failed and we were unable to recover it. 00:25:19.953 [2024-07-15 23:28:35.098548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.953 [2024-07-15 23:28:35.098577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.953 qpair failed and we were unable to recover it. 00:25:19.953 [2024-07-15 23:28:35.098744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.953 [2024-07-15 23:28:35.098787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.953 qpair failed and we were unable to recover it. 00:25:19.953 [2024-07-15 23:28:35.098965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.953 [2024-07-15 23:28:35.098988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.953 qpair failed and we were unable to recover it. 00:25:19.953 [2024-07-15 23:28:35.099112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.953 [2024-07-15 23:28:35.099140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.953 qpair failed and we were unable to recover it. 00:25:19.953 [2024-07-15 23:28:35.099281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.953 [2024-07-15 23:28:35.099309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.953 qpair failed and we were unable to recover it. 00:25:19.953 [2024-07-15 23:28:35.099448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.953 [2024-07-15 23:28:35.099486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.953 qpair failed and we were unable to recover it. 
00:25:19.953 [2024-07-15 23:28:35.099711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.953 [2024-07-15 23:28:35.099751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.953 qpair failed and we were unable to recover it. 00:25:19.953 [2024-07-15 23:28:35.099945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.953 [2024-07-15 23:28:35.099972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.953 qpair failed and we were unable to recover it. 00:25:19.953 [2024-07-15 23:28:35.100127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.953 [2024-07-15 23:28:35.100150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.953 qpair failed and we were unable to recover it. 00:25:19.953 [2024-07-15 23:28:35.100337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.953 [2024-07-15 23:28:35.100365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.953 qpair failed and we were unable to recover it. 00:25:19.953 [2024-07-15 23:28:35.100528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.953 [2024-07-15 23:28:35.100555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.953 qpair failed and we were unable to recover it. 00:25:19.953 [2024-07-15 23:28:35.100751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.953 [2024-07-15 23:28:35.100790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.953 qpair failed and we were unable to recover it. 00:25:19.953 [2024-07-15 23:28:35.100991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.953 [2024-07-15 23:28:35.101019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.953 qpair failed and we were unable to recover it. 00:25:19.953 [2024-07-15 23:28:35.101298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.953 [2024-07-15 23:28:35.101325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.953 qpair failed and we were unable to recover it. 00:25:19.953 [2024-07-15 23:28:35.101524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.953 [2024-07-15 23:28:35.101547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.953 qpair failed and we were unable to recover it. 00:25:19.953 [2024-07-15 23:28:35.101777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.953 [2024-07-15 23:28:35.101813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.953 qpair failed and we were unable to recover it. 
00:25:19.953 [2024-07-15 23:28:35.102011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.953 [2024-07-15 23:28:35.102039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.953 qpair failed and we were unable to recover it. 00:25:19.953 [2024-07-15 23:28:35.102261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.953 [2024-07-15 23:28:35.102291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.953 qpair failed and we were unable to recover it. 00:25:19.953 [2024-07-15 23:28:35.102479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.953 [2024-07-15 23:28:35.102506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.953 qpair failed and we were unable to recover it. 00:25:19.953 [2024-07-15 23:28:35.102682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.953 [2024-07-15 23:28:35.102709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.953 qpair failed and we were unable to recover it. 00:25:19.953 [2024-07-15 23:28:35.102905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.953 [2024-07-15 23:28:35.102929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.953 qpair failed and we were unable to recover it. 00:25:19.953 [2024-07-15 23:28:35.103111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.953 [2024-07-15 23:28:35.103139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.953 qpair failed and we were unable to recover it. 00:25:19.953 [2024-07-15 23:28:35.103312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.953 [2024-07-15 23:28:35.103339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.953 qpair failed and we were unable to recover it. 00:25:19.953 [2024-07-15 23:28:35.103460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.953 [2024-07-15 23:28:35.103497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.953 qpair failed and we were unable to recover it. 00:25:19.953 [2024-07-15 23:28:35.103614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.953 [2024-07-15 23:28:35.103638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.953 qpair failed and we were unable to recover it. 00:25:19.953 [2024-07-15 23:28:35.103804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.953 [2024-07-15 23:28:35.103832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.953 qpair failed and we were unable to recover it. 
00:25:19.953 [2024-07-15 23:28:35.104075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.953 [2024-07-15 23:28:35.104098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.953 qpair failed and we were unable to recover it. 00:25:19.954 [2024-07-15 23:28:35.104265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.954 [2024-07-15 23:28:35.104293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.954 qpair failed and we were unable to recover it. 00:25:19.954 [2024-07-15 23:28:35.104462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.954 [2024-07-15 23:28:35.104489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.954 qpair failed and we were unable to recover it. 00:25:19.954 [2024-07-15 23:28:35.104668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.954 [2024-07-15 23:28:35.104695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.954 qpair failed and we were unable to recover it. 00:25:19.954 [2024-07-15 23:28:35.104822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.954 [2024-07-15 23:28:35.104847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.954 qpair failed and we were unable to recover it. 00:25:19.954 [2024-07-15 23:28:35.105031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.954 [2024-07-15 23:28:35.105059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.954 qpair failed and we were unable to recover it. 00:25:19.954 [2024-07-15 23:28:35.105166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.954 [2024-07-15 23:28:35.105189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.954 qpair failed and we were unable to recover it. 00:25:19.954 [2024-07-15 23:28:35.105344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.954 [2024-07-15 23:28:35.105367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.954 qpair failed and we were unable to recover it. 00:25:19.954 [2024-07-15 23:28:35.105548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.954 [2024-07-15 23:28:35.105575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.954 qpair failed and we were unable to recover it. 00:25:19.954 [2024-07-15 23:28:35.105802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.954 [2024-07-15 23:28:35.105828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.954 qpair failed and we were unable to recover it. 
00:25:19.954 [2024-07-15 23:28:35.105966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.954 [2024-07-15 23:28:35.105994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.954 qpair failed and we were unable to recover it. 00:25:19.954 [2024-07-15 23:28:35.106167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.954 [2024-07-15 23:28:35.106199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.954 qpair failed and we were unable to recover it. 00:25:19.954 [2024-07-15 23:28:35.106355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.954 [2024-07-15 23:28:35.106378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.954 qpair failed and we were unable to recover it. 00:25:19.954 [2024-07-15 23:28:35.106586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.954 [2024-07-15 23:28:35.106614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.954 qpair failed and we were unable to recover it. 00:25:19.954 [2024-07-15 23:28:35.106799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.954 [2024-07-15 23:28:35.106828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.954 qpair failed and we were unable to recover it. 00:25:19.954 [2024-07-15 23:28:35.106991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.954 [2024-07-15 23:28:35.107015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.954 qpair failed and we were unable to recover it. 00:25:19.954 [2024-07-15 23:28:35.107195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.954 [2024-07-15 23:28:35.107222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.954 qpair failed and we were unable to recover it. 00:25:19.954 [2024-07-15 23:28:35.107386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.954 [2024-07-15 23:28:35.107413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.954 qpair failed and we were unable to recover it. 00:25:19.954 [2024-07-15 23:28:35.107574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.954 [2024-07-15 23:28:35.107597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.954 qpair failed and we were unable to recover it. 00:25:19.954 [2024-07-15 23:28:35.107806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.954 [2024-07-15 23:28:35.107848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.954 qpair failed and we were unable to recover it. 
00:25:19.954 [2024-07-15 23:28:35.108027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.954 [2024-07-15 23:28:35.108055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.954 qpair failed and we were unable to recover it. 00:25:19.954 [2024-07-15 23:28:35.108249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.954 [2024-07-15 23:28:35.108272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.954 qpair failed and we were unable to recover it. 00:25:19.954 [2024-07-15 23:28:35.108435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.954 [2024-07-15 23:28:35.108468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.954 qpair failed and we were unable to recover it. 00:25:19.954 [2024-07-15 23:28:35.108716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.954 [2024-07-15 23:28:35.108749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.954 qpair failed and we were unable to recover it. 00:25:19.954 [2024-07-15 23:28:35.108895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.954 [2024-07-15 23:28:35.108919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.954 qpair failed and we were unable to recover it. 00:25:19.954 [2024-07-15 23:28:35.109079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.954 [2024-07-15 23:28:35.109123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.954 qpair failed and we were unable to recover it. 00:25:19.954 [2024-07-15 23:28:35.109327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.954 [2024-07-15 23:28:35.109362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.954 qpair failed and we were unable to recover it. 00:25:19.954 [2024-07-15 23:28:35.109522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.954 [2024-07-15 23:28:35.109544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.954 qpair failed and we were unable to recover it. 00:25:19.954 [2024-07-15 23:28:35.109720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.954 [2024-07-15 23:28:35.109755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.954 qpair failed and we were unable to recover it. 00:25:19.954 [2024-07-15 23:28:35.109901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.954 [2024-07-15 23:28:35.109930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.954 qpair failed and we were unable to recover it. 
00:25:19.954 [2024-07-15 23:28:35.110167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.954 [2024-07-15 23:28:35.110190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.954 qpair failed and we were unable to recover it. 00:25:19.954 [2024-07-15 23:28:35.110378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.954 [2024-07-15 23:28:35.110406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.954 qpair failed and we were unable to recover it. 00:25:19.954 [2024-07-15 23:28:35.110571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.954 [2024-07-15 23:28:35.110599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.954 qpair failed and we were unable to recover it. 00:25:19.954 [2024-07-15 23:28:35.110763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.954 [2024-07-15 23:28:35.110787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.954 qpair failed and we were unable to recover it. 00:25:19.954 [2024-07-15 23:28:35.110946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.954 [2024-07-15 23:28:35.110969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.954 qpair failed and we were unable to recover it. 00:25:19.954 [2024-07-15 23:28:35.111174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.954 [2024-07-15 23:28:35.111202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.954 qpair failed and we were unable to recover it. 00:25:19.954 [2024-07-15 23:28:35.111363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.954 [2024-07-15 23:28:35.111386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.954 qpair failed and we were unable to recover it. 00:25:19.954 [2024-07-15 23:28:35.111613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.954 [2024-07-15 23:28:35.111649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.954 qpair failed and we were unable to recover it. 00:25:19.954 [2024-07-15 23:28:35.111836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.954 [2024-07-15 23:28:35.111861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.954 qpair failed and we were unable to recover it. 00:25:19.954 [2024-07-15 23:28:35.112070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.954 [2024-07-15 23:28:35.112116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.954 qpair failed and we were unable to recover it. 
00:25:19.954 [2024-07-15 23:28:35.112239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.954 [2024-07-15 23:28:35.112261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.955 qpair failed and we were unable to recover it. 00:25:19.955 [2024-07-15 23:28:35.112491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.955 [2024-07-15 23:28:35.112518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.955 qpair failed and we were unable to recover it. 00:25:19.955 [2024-07-15 23:28:35.112746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.955 [2024-07-15 23:28:35.112769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.955 qpair failed and we were unable to recover it. 00:25:19.955 [2024-07-15 23:28:35.112970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.955 [2024-07-15 23:28:35.112998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.955 qpair failed and we were unable to recover it. 00:25:19.955 [2024-07-15 23:28:35.113147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.955 [2024-07-15 23:28:35.113175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.955 qpair failed and we were unable to recover it. 00:25:19.955 [2024-07-15 23:28:35.113355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.955 [2024-07-15 23:28:35.113377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.955 qpair failed and we were unable to recover it. 00:25:19.955 [2024-07-15 23:28:35.113558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.955 [2024-07-15 23:28:35.113585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.955 qpair failed and we were unable to recover it. 00:25:19.955 [2024-07-15 23:28:35.113801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.955 [2024-07-15 23:28:35.113829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.955 qpair failed and we were unable to recover it. 00:25:19.955 [2024-07-15 23:28:35.113959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.955 [2024-07-15 23:28:35.113998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.955 qpair failed and we were unable to recover it. 00:25:19.955 [2024-07-15 23:28:35.114209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.955 [2024-07-15 23:28:35.114237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.955 qpair failed and we were unable to recover it. 
00:25:19.955 [2024-07-15 23:28:35.114445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.955 [2024-07-15 23:28:35.114473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.955 qpair failed and we were unable to recover it. 00:25:19.955 [2024-07-15 23:28:35.114628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.955 [2024-07-15 23:28:35.114650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.955 qpair failed and we were unable to recover it. 00:25:19.955 [2024-07-15 23:28:35.114816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.955 [2024-07-15 23:28:35.114844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.955 qpair failed and we were unable to recover it. 00:25:19.955 [2024-07-15 23:28:35.115060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.955 [2024-07-15 23:28:35.115089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.955 qpair failed and we were unable to recover it. 00:25:19.955 [2024-07-15 23:28:35.115244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.955 [2024-07-15 23:28:35.115267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.955 qpair failed and we were unable to recover it. 00:25:19.955 [2024-07-15 23:28:35.115496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.955 [2024-07-15 23:28:35.115524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.955 qpair failed and we were unable to recover it. 00:25:19.955 [2024-07-15 23:28:35.115681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.955 [2024-07-15 23:28:35.115709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.955 qpair failed and we were unable to recover it. 00:25:19.955 [2024-07-15 23:28:35.115885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.955 [2024-07-15 23:28:35.115909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.955 qpair failed and we were unable to recover it. 00:25:19.955 [2024-07-15 23:28:35.116025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.955 [2024-07-15 23:28:35.116066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.955 qpair failed and we were unable to recover it. 00:25:19.955 [2024-07-15 23:28:35.116256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.955 [2024-07-15 23:28:35.116284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.955 qpair failed and we were unable to recover it. 
00:25:19.955 [2024-07-15 23:28:35.116501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.955 [2024-07-15 23:28:35.116523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.955 qpair failed and we were unable to recover it. 00:25:19.955 [2024-07-15 23:28:35.116701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.955 [2024-07-15 23:28:35.116729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.955 qpair failed and we were unable to recover it. 00:25:19.955 [2024-07-15 23:28:35.116950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.955 [2024-07-15 23:28:35.116981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.955 qpair failed and we were unable to recover it. 00:25:19.955 [2024-07-15 23:28:35.117250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.955 [2024-07-15 23:28:35.117272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.955 qpair failed and we were unable to recover it. 00:25:19.955 [2024-07-15 23:28:35.117452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.955 [2024-07-15 23:28:35.117489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.955 qpair failed and we were unable to recover it. 00:25:19.955 [2024-07-15 23:28:35.117606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.955 [2024-07-15 23:28:35.117634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.955 qpair failed and we were unable to recover it. 00:25:19.955 [2024-07-15 23:28:35.117811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.955 [2024-07-15 23:28:35.117855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.955 qpair failed and we were unable to recover it. 00:25:19.955 [2024-07-15 23:28:35.118010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.955 [2024-07-15 23:28:35.118050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.955 qpair failed and we were unable to recover it. 00:25:19.955 [2024-07-15 23:28:35.118265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.955 [2024-07-15 23:28:35.118293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.955 qpair failed and we were unable to recover it. 00:25:19.955 [2024-07-15 23:28:35.118418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.955 [2024-07-15 23:28:35.118456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.955 qpair failed and we were unable to recover it. 
00:25:19.955 [2024-07-15 23:28:35.118677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.955 [2024-07-15 23:28:35.118704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.955 qpair failed and we were unable to recover it. 00:25:19.955 [2024-07-15 23:28:35.118899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.955 [2024-07-15 23:28:35.118924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.955 qpair failed and we were unable to recover it. 00:25:19.955 [2024-07-15 23:28:35.119064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.955 [2024-07-15 23:28:35.119101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.955 qpair failed and we were unable to recover it. 00:25:19.955 [2024-07-15 23:28:35.119316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.955 [2024-07-15 23:28:35.119343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.955 qpair failed and we were unable to recover it. 00:25:19.955 [2024-07-15 23:28:35.119580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.955 [2024-07-15 23:28:35.119614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.955 qpair failed and we were unable to recover it. 00:25:19.955 [2024-07-15 23:28:35.119878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.955 [2024-07-15 23:28:35.119903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.955 qpair failed and we were unable to recover it. 00:25:19.955 [2024-07-15 23:28:35.120048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.955 [2024-07-15 23:28:35.120077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.955 qpair failed and we were unable to recover it. 00:25:19.955 [2024-07-15 23:28:35.120231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.955 [2024-07-15 23:28:35.120258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.955 qpair failed and we were unable to recover it. 00:25:19.955 [2024-07-15 23:28:35.120402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.955 [2024-07-15 23:28:35.120443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.955 qpair failed and we were unable to recover it. 00:25:19.955 [2024-07-15 23:28:35.120613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.955 [2024-07-15 23:28:35.120641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.955 qpair failed and we were unable to recover it. 
00:25:19.955 [2024-07-15 23:28:35.120867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.955 [2024-07-15 23:28:35.120895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.955 qpair failed and we were unable to recover it. 00:25:19.955 [2024-07-15 23:28:35.121079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.956 [2024-07-15 23:28:35.121102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.956 qpair failed and we were unable to recover it. 00:25:19.956 [2024-07-15 23:28:35.121261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.956 [2024-07-15 23:28:35.121289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.956 qpair failed and we were unable to recover it. 00:25:19.956 [2024-07-15 23:28:35.121509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.956 [2024-07-15 23:28:35.121537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.956 qpair failed and we were unable to recover it. 00:25:19.956 [2024-07-15 23:28:35.121730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.956 [2024-07-15 23:28:35.121775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.956 qpair failed and we were unable to recover it. 00:25:19.956 [2024-07-15 23:28:35.121930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.956 [2024-07-15 23:28:35.121973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.956 qpair failed and we were unable to recover it. 00:25:19.956 [2024-07-15 23:28:35.122123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.956 [2024-07-15 23:28:35.122151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.956 qpair failed and we were unable to recover it. 00:25:19.956 [2024-07-15 23:28:35.122370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.956 [2024-07-15 23:28:35.122393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.956 qpair failed and we were unable to recover it. 00:25:19.956 [2024-07-15 23:28:35.122544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.956 [2024-07-15 23:28:35.122572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.956 qpair failed and we were unable to recover it. 00:25:19.956 [2024-07-15 23:28:35.122771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.956 [2024-07-15 23:28:35.122799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.956 qpair failed and we were unable to recover it. 
00:25:19.956 [2024-07-15 23:28:35.123016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.956 [2024-07-15 23:28:35.123054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.956 qpair failed and we were unable to recover it. 00:25:19.956 [2024-07-15 23:28:35.123226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.956 [2024-07-15 23:28:35.123254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.956 qpair failed and we were unable to recover it. 00:25:19.956 [2024-07-15 23:28:35.123379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.956 [2024-07-15 23:28:35.123407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.956 qpair failed and we were unable to recover it. 00:25:19.956 [2024-07-15 23:28:35.123572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.956 [2024-07-15 23:28:35.123596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.956 qpair failed and we were unable to recover it. 00:25:19.956 [2024-07-15 23:28:35.123780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.956 [2024-07-15 23:28:35.123809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.956 qpair failed and we were unable to recover it. 00:25:19.956 [2024-07-15 23:28:35.123997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.956 [2024-07-15 23:28:35.124025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.956 qpair failed and we were unable to recover it. 00:25:19.956 [2024-07-15 23:28:35.124156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.956 [2024-07-15 23:28:35.124183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.956 qpair failed and we were unable to recover it. 00:25:19.956 [2024-07-15 23:28:35.124377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.956 [2024-07-15 23:28:35.124399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.956 qpair failed and we were unable to recover it. 00:25:19.956 [2024-07-15 23:28:35.124559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.956 [2024-07-15 23:28:35.124587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.956 qpair failed and we were unable to recover it. 00:25:19.956 [2024-07-15 23:28:35.124821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.956 [2024-07-15 23:28:35.124846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.956 qpair failed and we were unable to recover it. 
00:25:19.956 [2024-07-15 23:28:35.125134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.956 [2024-07-15 23:28:35.125162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.956 qpair failed and we were unable to recover it. 00:25:19.956 [2024-07-15 23:28:35.125311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.956 [2024-07-15 23:28:35.125334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.956 qpair failed and we were unable to recover it. 00:25:19.956 [2024-07-15 23:28:35.125584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.956 [2024-07-15 23:28:35.125611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.956 qpair failed and we were unable to recover it. 00:25:19.956 [2024-07-15 23:28:35.125759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.956 [2024-07-15 23:28:35.125816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.956 qpair failed and we were unable to recover it. 00:25:19.956 [2024-07-15 23:28:35.126037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.956 [2024-07-15 23:28:35.126064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.956 qpair failed and we were unable to recover it. 00:25:19.956 [2024-07-15 23:28:35.126257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.956 [2024-07-15 23:28:35.126279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.956 qpair failed and we were unable to recover it. 00:25:19.956 [2024-07-15 23:28:35.126433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.956 [2024-07-15 23:28:35.126455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.956 qpair failed and we were unable to recover it. 00:25:19.956 [2024-07-15 23:28:35.126735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.956 [2024-07-15 23:28:35.126770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.956 qpair failed and we were unable to recover it. 00:25:19.956 [2024-07-15 23:28:35.126998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.956 [2024-07-15 23:28:35.127026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.956 qpair failed and we were unable to recover it. 00:25:19.956 [2024-07-15 23:28:35.127239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.956 [2024-07-15 23:28:35.127261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.956 qpair failed and we were unable to recover it. 
00:25:19.956 [2024-07-15 23:28:35.127544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.956 [2024-07-15 23:28:35.127571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.956 qpair failed and we were unable to recover it. 00:25:19.956 [2024-07-15 23:28:35.127795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.956 [2024-07-15 23:28:35.127829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.956 qpair failed and we were unable to recover it. 00:25:19.956 [2024-07-15 23:28:35.127996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.956 [2024-07-15 23:28:35.128023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.956 qpair failed and we were unable to recover it. 00:25:19.956 [2024-07-15 23:28:35.128177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.956 [2024-07-15 23:28:35.128200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.956 qpair failed and we were unable to recover it. 00:25:19.956 [2024-07-15 23:28:35.128385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.956 [2024-07-15 23:28:35.128413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.956 qpair failed and we were unable to recover it. 00:25:19.956 [2024-07-15 23:28:35.128701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.956 [2024-07-15 23:28:35.128729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.956 qpair failed and we were unable to recover it. 00:25:19.956 [2024-07-15 23:28:35.128853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.956 [2024-07-15 23:28:35.128881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.956 qpair failed and we were unable to recover it. 00:25:19.956 [2024-07-15 23:28:35.129029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.956 [2024-07-15 23:28:35.129067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.956 qpair failed and we were unable to recover it. 00:25:19.956 [2024-07-15 23:28:35.129293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.956 [2024-07-15 23:28:35.129328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.956 qpair failed and we were unable to recover it. 00:25:19.956 [2024-07-15 23:28:35.129513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.956 [2024-07-15 23:28:35.129565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.956 qpair failed and we were unable to recover it. 
00:25:19.956 [2024-07-15 23:28:35.129710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.956 [2024-07-15 23:28:35.129756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.956 qpair failed and we were unable to recover it. 00:25:19.956 [2024-07-15 23:28:35.130018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.956 [2024-07-15 23:28:35.130056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.956 qpair failed and we were unable to recover it. 00:25:19.956 [2024-07-15 23:28:35.130305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.957 [2024-07-15 23:28:35.130333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.957 qpair failed and we were unable to recover it. 00:25:19.957 [2024-07-15 23:28:35.130538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.957 [2024-07-15 23:28:35.130589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.957 qpair failed and we were unable to recover it. 00:25:19.957 [2024-07-15 23:28:35.130746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.957 [2024-07-15 23:28:35.130782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.957 qpair failed and we were unable to recover it. 00:25:19.957 [2024-07-15 23:28:35.131012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.957 [2024-07-15 23:28:35.131036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.957 qpair failed and we were unable to recover it. 00:25:19.957 [2024-07-15 23:28:35.131241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.957 [2024-07-15 23:28:35.131269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.957 qpair failed and we were unable to recover it. 00:25:19.957 [2024-07-15 23:28:35.131501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.957 [2024-07-15 23:28:35.131552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.957 qpair failed and we were unable to recover it. 00:25:19.957 [2024-07-15 23:28:35.131715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.957 [2024-07-15 23:28:35.131755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.957 qpair failed and we were unable to recover it. 00:25:19.957 [2024-07-15 23:28:35.131936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.957 [2024-07-15 23:28:35.131969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.957 qpair failed and we were unable to recover it. 
00:25:19.957 [2024-07-15 23:28:35.132166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.957 [2024-07-15 23:28:35.132194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.957 qpair failed and we were unable to recover it. 00:25:19.957 [2024-07-15 23:28:35.132341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.957 [2024-07-15 23:28:35.132391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.957 qpair failed and we were unable to recover it. 00:25:19.957 [2024-07-15 23:28:35.132564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.957 [2024-07-15 23:28:35.132592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.957 qpair failed and we were unable to recover it. 00:25:19.957 [2024-07-15 23:28:35.132827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.957 [2024-07-15 23:28:35.132851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.957 qpair failed and we were unable to recover it. 00:25:19.957 [2024-07-15 23:28:35.133031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.957 [2024-07-15 23:28:35.133058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.957 qpair failed and we were unable to recover it. 00:25:19.957 [2024-07-15 23:28:35.133277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.957 [2024-07-15 23:28:35.133313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.957 qpair failed and we were unable to recover it. 00:25:19.957 [2024-07-15 23:28:35.133458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.957 [2024-07-15 23:28:35.133486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.957 qpair failed and we were unable to recover it. 00:25:19.957 [2024-07-15 23:28:35.133726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.957 [2024-07-15 23:28:35.133762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.957 qpair failed and we were unable to recover it. 00:25:19.957 [2024-07-15 23:28:35.133961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.957 [2024-07-15 23:28:35.133985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.957 qpair failed and we were unable to recover it. 00:25:19.957 [2024-07-15 23:28:35.134136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.957 [2024-07-15 23:28:35.134190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.957 qpair failed and we were unable to recover it. 
00:25:19.957 [2024-07-15 23:28:35.134385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.957 [2024-07-15 23:28:35.134413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.957 qpair failed and we were unable to recover it. 00:25:19.957 [2024-07-15 23:28:35.134625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.957 [2024-07-15 23:28:35.134652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.957 qpair failed and we were unable to recover it. 00:25:19.957 [2024-07-15 23:28:35.134844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.957 [2024-07-15 23:28:35.134869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.957 qpair failed and we were unable to recover it. 00:25:19.957 [2024-07-15 23:28:35.135125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.957 [2024-07-15 23:28:35.135174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.957 qpair failed and we were unable to recover it. 00:25:19.957 [2024-07-15 23:28:35.135325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.957 [2024-07-15 23:28:35.135353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.957 qpair failed and we were unable to recover it. 00:25:19.957 [2024-07-15 23:28:35.135494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.957 [2024-07-15 23:28:35.135531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.957 qpair failed and we were unable to recover it. 00:25:19.957 [2024-07-15 23:28:35.135680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.957 [2024-07-15 23:28:35.135721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.957 qpair failed and we were unable to recover it. 00:25:19.957 [2024-07-15 23:28:35.135904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.957 [2024-07-15 23:28:35.135928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.957 qpair failed and we were unable to recover it. 00:25:19.957 [2024-07-15 23:28:35.136071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.957 [2024-07-15 23:28:35.136099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.957 qpair failed and we were unable to recover it. 00:25:19.957 [2024-07-15 23:28:35.136282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.957 [2024-07-15 23:28:35.136304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.957 qpair failed and we were unable to recover it. 
00:25:19.957 [2024-07-15 23:28:35.136448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.957 [2024-07-15 23:28:35.136486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.957 qpair failed and we were unable to recover it. 00:25:19.957 [2024-07-15 23:28:35.136665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.957 [2024-07-15 23:28:35.136693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.957 qpair failed and we were unable to recover it. 00:25:19.957 [2024-07-15 23:28:35.136876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.957 [2024-07-15 23:28:35.136905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.957 qpair failed and we were unable to recover it. 00:25:19.957 [2024-07-15 23:28:35.137113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.957 [2024-07-15 23:28:35.137158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.957 qpair failed and we were unable to recover it. 00:25:19.957 [2024-07-15 23:28:35.137339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.957 [2024-07-15 23:28:35.137367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.957 qpair failed and we were unable to recover it. 00:25:19.957 [2024-07-15 23:28:35.137517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.957 [2024-07-15 23:28:35.137544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.957 qpair failed and we were unable to recover it. 00:25:19.957 [2024-07-15 23:28:35.137725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.957 [2024-07-15 23:28:35.137762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.957 qpair failed and we were unable to recover it. 00:25:19.957 [2024-07-15 23:28:35.137972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.957 [2024-07-15 23:28:35.137996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.957 qpair failed and we were unable to recover it. 00:25:19.957 [2024-07-15 23:28:35.138142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.957 [2024-07-15 23:28:35.138175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.957 qpair failed and we were unable to recover it. 00:25:19.957 [2024-07-15 23:28:35.138326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.957 [2024-07-15 23:28:35.138384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.957 qpair failed and we were unable to recover it. 
00:25:19.957 [2024-07-15 23:28:35.138649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.957 [2024-07-15 23:28:35.138676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.957 qpair failed and we were unable to recover it. 00:25:19.957 [2024-07-15 23:28:35.138867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.957 [2024-07-15 23:28:35.138891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.957 qpair failed and we were unable to recover it. 00:25:19.957 [2024-07-15 23:28:35.139062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.957 [2024-07-15 23:28:35.139090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.957 qpair failed and we were unable to recover it. 00:25:19.958 [2024-07-15 23:28:35.139322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.958 [2024-07-15 23:28:35.139369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.958 qpair failed and we were unable to recover it. 00:25:19.958 [2024-07-15 23:28:35.139544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.958 [2024-07-15 23:28:35.139572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.958 qpair failed and we were unable to recover it. 00:25:19.958 [2024-07-15 23:28:35.139728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.958 [2024-07-15 23:28:35.139791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.958 qpair failed and we were unable to recover it. 00:25:19.958 [2024-07-15 23:28:35.140001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.958 [2024-07-15 23:28:35.140029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.958 qpair failed and we were unable to recover it. 00:25:19.958 [2024-07-15 23:28:35.140203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.958 [2024-07-15 23:28:35.140251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.958 qpair failed and we were unable to recover it. 00:25:19.958 [2024-07-15 23:28:35.140468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.958 [2024-07-15 23:28:35.140496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.958 qpair failed and we were unable to recover it. 00:25:19.958 [2024-07-15 23:28:35.140656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.958 [2024-07-15 23:28:35.140684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.958 qpair failed and we were unable to recover it. 
00:25:19.958 [2024-07-15 23:28:35.140828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.958 [2024-07-15 23:28:35.140870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.958 qpair failed and we were unable to recover it. 00:25:19.958 [2024-07-15 23:28:35.141113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.958 [2024-07-15 23:28:35.141169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.958 qpair failed and we were unable to recover it. 00:25:19.958 [2024-07-15 23:28:35.141389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.958 [2024-07-15 23:28:35.141417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.958 qpair failed and we were unable to recover it. 00:25:19.958 [2024-07-15 23:28:35.141612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.958 [2024-07-15 23:28:35.141635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.958 qpair failed and we were unable to recover it. 00:25:19.958 [2024-07-15 23:28:35.141798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.958 [2024-07-15 23:28:35.141827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.958 qpair failed and we were unable to recover it. 00:25:19.958 [2024-07-15 23:28:35.141970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.958 [2024-07-15 23:28:35.141998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.958 qpair failed and we were unable to recover it. 00:25:19.958 [2024-07-15 23:28:35.142161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.958 [2024-07-15 23:28:35.142188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.958 qpair failed and we were unable to recover it. 00:25:19.958 [2024-07-15 23:28:35.142367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.958 [2024-07-15 23:28:35.142390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.958 qpair failed and we were unable to recover it. 00:25:19.958 [2024-07-15 23:28:35.142595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.958 [2024-07-15 23:28:35.142623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.958 qpair failed and we were unable to recover it. 00:25:19.958 [2024-07-15 23:28:35.142808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.958 [2024-07-15 23:28:35.142863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.958 qpair failed and we were unable to recover it. 
00:25:19.958 [2024-07-15 23:28:35.143009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.958 [2024-07-15 23:28:35.143036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.958 qpair failed and we were unable to recover it. 00:25:19.958 [2024-07-15 23:28:35.143291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.958 [2024-07-15 23:28:35.143313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.958 qpair failed and we were unable to recover it. 00:25:19.958 [2024-07-15 23:28:35.143585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.958 [2024-07-15 23:28:35.143613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.958 qpair failed and we were unable to recover it. 00:25:19.958 [2024-07-15 23:28:35.143761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.958 [2024-07-15 23:28:35.143803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.958 qpair failed and we were unable to recover it. 00:25:19.958 [2024-07-15 23:28:35.144031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.958 [2024-07-15 23:28:35.144059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.958 qpair failed and we were unable to recover it. 00:25:19.958 [2024-07-15 23:28:35.144179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.958 [2024-07-15 23:28:35.144203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.958 qpair failed and we were unable to recover it. 00:25:19.958 [2024-07-15 23:28:35.144364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.958 [2024-07-15 23:28:35.144402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.958 qpair failed and we were unable to recover it. 00:25:19.958 [2024-07-15 23:28:35.144554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.958 [2024-07-15 23:28:35.144581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.958 qpair failed and we were unable to recover it. 00:25:19.958 [2024-07-15 23:28:35.144734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.958 [2024-07-15 23:28:35.144769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.958 qpair failed and we were unable to recover it. 00:25:19.958 [2024-07-15 23:28:35.144987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.958 [2024-07-15 23:28:35.145010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.958 qpair failed and we were unable to recover it. 
00:25:19.958 [2024-07-15 23:28:35.145160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.958 [2024-07-15 23:28:35.145188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.958 qpair failed and we were unable to recover it. 00:25:19.958 [2024-07-15 23:28:35.145435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.958 [2024-07-15 23:28:35.145483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.958 qpair failed and we were unable to recover it. 00:25:19.958 [2024-07-15 23:28:35.145694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.958 [2024-07-15 23:28:35.145722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.958 qpair failed and we were unable to recover it. 00:25:19.958 [2024-07-15 23:28:35.145887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.958 [2024-07-15 23:28:35.145911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.958 qpair failed and we were unable to recover it. 00:25:19.958 [2024-07-15 23:28:35.146052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.958 [2024-07-15 23:28:35.146101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.958 qpair failed and we were unable to recover it. 00:25:19.958 [2024-07-15 23:28:35.146273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.958 [2024-07-15 23:28:35.146331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.958 qpair failed and we were unable to recover it. 00:25:19.958 [2024-07-15 23:28:35.146477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.958 [2024-07-15 23:28:35.146507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.958 qpair failed and we were unable to recover it. 00:25:19.958 [2024-07-15 23:28:35.146702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.958 [2024-07-15 23:28:35.146725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.958 qpair failed and we were unable to recover it. 00:25:19.959 [2024-07-15 23:28:35.146896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.959 [2024-07-15 23:28:35.146929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.959 qpair failed and we were unable to recover it. 00:25:19.959 [2024-07-15 23:28:35.147089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.959 [2024-07-15 23:28:35.147150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.959 qpair failed and we were unable to recover it. 
00:25:19.959 [2024-07-15 23:28:35.147379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.959 [2024-07-15 23:28:35.147407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.959 qpair failed and we were unable to recover it. 00:25:19.959 [2024-07-15 23:28:35.147589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.959 [2024-07-15 23:28:35.147611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.959 qpair failed and we were unable to recover it. 00:25:19.959 [2024-07-15 23:28:35.147812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.959 [2024-07-15 23:28:35.147840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.959 qpair failed and we were unable to recover it. 00:25:19.959 [2024-07-15 23:28:35.148104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.959 [2024-07-15 23:28:35.148154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.959 qpair failed and we were unable to recover it. 00:25:19.959 [2024-07-15 23:28:35.148317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.959 [2024-07-15 23:28:35.148345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.959 qpair failed and we were unable to recover it. 00:25:19.959 [2024-07-15 23:28:35.148518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.959 [2024-07-15 23:28:35.148541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.959 qpair failed and we were unable to recover it. 00:25:19.959 [2024-07-15 23:28:35.148765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.959 [2024-07-15 23:28:35.148794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.959 qpair failed and we were unable to recover it. 00:25:19.959 [2024-07-15 23:28:35.148957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.959 [2024-07-15 23:28:35.148985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.959 qpair failed and we were unable to recover it. 00:25:19.959 [2024-07-15 23:28:35.149247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.959 [2024-07-15 23:28:35.149274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.959 qpair failed and we were unable to recover it. 00:25:19.959 [2024-07-15 23:28:35.149665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.959 [2024-07-15 23:28:35.149715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.959 qpair failed and we were unable to recover it. 
00:25:19.959 [2024-07-15 23:28:35.149869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.959 [2024-07-15 23:28:35.149900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.959 qpair failed and we were unable to recover it. 00:25:19.959 [2024-07-15 23:28:35.150075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.959 [2024-07-15 23:28:35.150132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.959 qpair failed and we were unable to recover it. 00:25:19.959 [2024-07-15 23:28:35.150307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.959 [2024-07-15 23:28:35.150335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.959 qpair failed and we were unable to recover it. 00:25:19.959 [2024-07-15 23:28:35.150501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.959 [2024-07-15 23:28:35.150534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.959 qpair failed and we were unable to recover it. 00:25:19.959 [2024-07-15 23:28:35.150692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.959 [2024-07-15 23:28:35.150715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.959 qpair failed and we were unable to recover it. 00:25:19.959 [2024-07-15 23:28:35.150912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.959 [2024-07-15 23:28:35.150940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.959 qpair failed and we were unable to recover it. 00:25:19.959 [2024-07-15 23:28:35.151091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.959 [2024-07-15 23:28:35.151119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.959 qpair failed and we were unable to recover it. 00:25:19.959 [2024-07-15 23:28:35.151320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.959 [2024-07-15 23:28:35.151343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.959 qpair failed and we were unable to recover it. 00:25:19.959 [2024-07-15 23:28:35.151582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.959 [2024-07-15 23:28:35.151610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.959 qpair failed and we were unable to recover it. 00:25:19.959 [2024-07-15 23:28:35.151804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.959 [2024-07-15 23:28:35.151830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.959 qpair failed and we were unable to recover it. 
00:25:19.959 [2024-07-15 23:28:35.152037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.959 [2024-07-15 23:28:35.152065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.959 qpair failed and we were unable to recover it. 00:25:19.959 [2024-07-15 23:28:35.152194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.959 [2024-07-15 23:28:35.152233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.959 qpair failed and we were unable to recover it. 00:25:19.959 [2024-07-15 23:28:35.152504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.959 [2024-07-15 23:28:35.152532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.959 qpair failed and we were unable to recover it. 00:25:19.959 [2024-07-15 23:28:35.152754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.959 [2024-07-15 23:28:35.152787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.959 qpair failed and we were unable to recover it. 00:25:19.959 [2024-07-15 23:28:35.152951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.959 [2024-07-15 23:28:35.152979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.959 qpair failed and we were unable to recover it. 00:25:19.959 [2024-07-15 23:28:35.153210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.959 [2024-07-15 23:28:35.153233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.959 qpair failed and we were unable to recover it. 00:25:19.959 [2024-07-15 23:28:35.153423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.959 [2024-07-15 23:28:35.153451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.959 qpair failed and we were unable to recover it. 00:25:19.959 [2024-07-15 23:28:35.153633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.959 [2024-07-15 23:28:35.153661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.959 qpair failed and we were unable to recover it. 00:25:19.959 [2024-07-15 23:28:35.153825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.959 [2024-07-15 23:28:35.153854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.959 qpair failed and we were unable to recover it. 00:25:19.959 [2024-07-15 23:28:35.154048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.959 [2024-07-15 23:28:35.154086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.959 qpair failed and we were unable to recover it. 
00:25:19.959 [2024-07-15 23:28:35.154287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.959 [2024-07-15 23:28:35.154315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.959 qpair failed and we were unable to recover it. 00:25:19.959 [2024-07-15 23:28:35.154502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.959 [2024-07-15 23:28:35.154552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.959 qpair failed and we were unable to recover it. 00:25:19.959 [2024-07-15 23:28:35.154703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.959 [2024-07-15 23:28:35.154730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.959 qpair failed and we were unable to recover it. 00:25:19.959 [2024-07-15 23:28:35.154911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.959 [2024-07-15 23:28:35.154946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.959 qpair failed and we were unable to recover it. 00:25:19.959 [2024-07-15 23:28:35.155224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.959 [2024-07-15 23:28:35.155252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.959 qpair failed and we were unable to recover it. 00:25:19.959 [2024-07-15 23:28:35.155481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.959 [2024-07-15 23:28:35.155530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.959 qpair failed and we were unable to recover it. 00:25:19.959 [2024-07-15 23:28:35.155701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.959 [2024-07-15 23:28:35.155729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.959 qpair failed and we were unable to recover it. 00:25:19.959 [2024-07-15 23:28:35.155934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.959 [2024-07-15 23:28:35.155959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.959 qpair failed and we were unable to recover it. 00:25:19.959 [2024-07-15 23:28:35.156141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.960 [2024-07-15 23:28:35.156173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.960 qpair failed and we were unable to recover it. 00:25:19.960 [2024-07-15 23:28:35.156390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.960 [2024-07-15 23:28:35.156439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.960 qpair failed and we were unable to recover it. 
00:25:19.960 [2024-07-15 23:28:35.156676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.960 [2024-07-15 23:28:35.156704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.960 qpair failed and we were unable to recover it. 00:25:19.960 [2024-07-15 23:28:35.156828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.960 [2024-07-15 23:28:35.156853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.960 qpair failed and we were unable to recover it. 00:25:19.960 [2024-07-15 23:28:35.157043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.960 [2024-07-15 23:28:35.157066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.960 qpair failed and we were unable to recover it. 00:25:19.960 [2024-07-15 23:28:35.157251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.960 [2024-07-15 23:28:35.157309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.960 qpair failed and we were unable to recover it. 00:25:19.960 [2024-07-15 23:28:35.157487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.960 [2024-07-15 23:28:35.157515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.960 qpair failed and we were unable to recover it. 00:25:19.960 [2024-07-15 23:28:35.157673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.960 [2024-07-15 23:28:35.157696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.960 qpair failed and we were unable to recover it. 00:25:19.960 [2024-07-15 23:28:35.157879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.960 [2024-07-15 23:28:35.157909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.960 qpair failed and we were unable to recover it. 00:25:19.960 [2024-07-15 23:28:35.158120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.960 [2024-07-15 23:28:35.158170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.960 qpair failed and we were unable to recover it. 00:25:19.960 [2024-07-15 23:28:35.158387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.960 [2024-07-15 23:28:35.158415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.960 qpair failed and we were unable to recover it. 00:25:19.960 [2024-07-15 23:28:35.158630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.960 [2024-07-15 23:28:35.158653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.960 qpair failed and we were unable to recover it. 
00:25:19.960 [2024-07-15 23:28:35.158803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.960 [2024-07-15 23:28:35.158832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.960 qpair failed and we were unable to recover it. 00:25:19.960 [2024-07-15 23:28:35.158982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.960 [2024-07-15 23:28:35.159009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.960 qpair failed and we were unable to recover it. 00:25:19.960 [2024-07-15 23:28:35.159203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.960 [2024-07-15 23:28:35.159231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.960 qpair failed and we were unable to recover it. 00:25:19.960 [2024-07-15 23:28:35.159423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.960 [2024-07-15 23:28:35.159446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.960 qpair failed and we were unable to recover it. 00:25:19.960 [2024-07-15 23:28:35.159682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.960 [2024-07-15 23:28:35.159710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.960 qpair failed and we were unable to recover it. 00:25:19.960 [2024-07-15 23:28:35.159935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.960 [2024-07-15 23:28:35.159961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.960 qpair failed and we were unable to recover it. 00:25:19.960 [2024-07-15 23:28:35.160135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.960 [2024-07-15 23:28:35.160162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.960 qpair failed and we were unable to recover it. 00:25:19.960 [2024-07-15 23:28:35.160410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.960 [2024-07-15 23:28:35.160433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.960 qpair failed and we were unable to recover it. 00:25:19.960 [2024-07-15 23:28:35.160631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.960 [2024-07-15 23:28:35.160659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.960 qpair failed and we were unable to recover it. 00:25:19.960 [2024-07-15 23:28:35.160868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.960 [2024-07-15 23:28:35.160919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.960 qpair failed and we were unable to recover it. 
00:25:19.960 [2024-07-15 23:28:35.161099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.960 [2024-07-15 23:28:35.161126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.960 qpair failed and we were unable to recover it. 00:25:19.960 [2024-07-15 23:28:35.161250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.960 [2024-07-15 23:28:35.161287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.960 qpair failed and we were unable to recover it. 00:25:19.960 [2024-07-15 23:28:35.161453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.960 [2024-07-15 23:28:35.161496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.960 qpair failed and we were unable to recover it. 00:25:19.960 [2024-07-15 23:28:35.161637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.960 [2024-07-15 23:28:35.161666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.960 qpair failed and we were unable to recover it. 00:25:19.960 [2024-07-15 23:28:35.161850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.960 [2024-07-15 23:28:35.161878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.960 qpair failed and we were unable to recover it. 00:25:19.960 [2024-07-15 23:28:35.162048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.960 [2024-07-15 23:28:35.162072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.960 qpair failed and we were unable to recover it. 00:25:19.960 [2024-07-15 23:28:35.162185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.960 [2024-07-15 23:28:35.162208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.960 qpair failed and we were unable to recover it. 00:25:19.960 [2024-07-15 23:28:35.162375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.960 [2024-07-15 23:28:35.162403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.960 qpair failed and we were unable to recover it. 00:25:19.961 [2024-07-15 23:28:35.162515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.961 [2024-07-15 23:28:35.162543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.961 qpair failed and we were unable to recover it. 00:25:19.961 [2024-07-15 23:28:35.162770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.961 [2024-07-15 23:28:35.162811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.961 qpair failed and we were unable to recover it. 
00:25:19.961 [2024-07-15 23:28:35.163032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.961 [2024-07-15 23:28:35.163060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.961 qpair failed and we were unable to recover it. 00:25:19.961 [2024-07-15 23:28:35.163196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.961 [2024-07-15 23:28:35.163251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.961 qpair failed and we were unable to recover it. 00:25:19.961 [2024-07-15 23:28:35.163457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.961 [2024-07-15 23:28:35.163485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.961 qpair failed and we were unable to recover it. 00:25:19.961 [2024-07-15 23:28:35.163604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.961 [2024-07-15 23:28:35.163642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.961 qpair failed and we were unable to recover it. 00:25:19.961 [2024-07-15 23:28:35.163848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.961 [2024-07-15 23:28:35.163876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.961 qpair failed and we were unable to recover it. 00:25:19.961 [2024-07-15 23:28:35.164065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.961 [2024-07-15 23:28:35.164122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.961 qpair failed and we were unable to recover it. 00:25:19.961 [2024-07-15 23:28:35.164310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.961 [2024-07-15 23:28:35.164338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.961 qpair failed and we were unable to recover it. 00:25:19.961 [2024-07-15 23:28:35.164624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.961 [2024-07-15 23:28:35.164646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.961 qpair failed and we were unable to recover it. 00:25:19.961 [2024-07-15 23:28:35.164830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.961 [2024-07-15 23:28:35.164863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.961 qpair failed and we were unable to recover it. 00:25:19.961 [2024-07-15 23:28:35.165073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.961 [2024-07-15 23:28:35.165123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.961 qpair failed and we were unable to recover it. 
00:25:19.961 [2024-07-15 23:28:35.165292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.961 [2024-07-15 23:28:35.165319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.961 qpair failed and we were unable to recover it. 00:25:19.961 [2024-07-15 23:28:35.165486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.961 [2024-07-15 23:28:35.165512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.961 qpair failed and we were unable to recover it. 00:25:19.961 [2024-07-15 23:28:35.165787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.961 [2024-07-15 23:28:35.165816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.961 qpair failed and we were unable to recover it. 00:25:19.961 [2024-07-15 23:28:35.166072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.961 [2024-07-15 23:28:35.166124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.961 qpair failed and we were unable to recover it. 00:25:19.961 [2024-07-15 23:28:35.166346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.961 [2024-07-15 23:28:35.166374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.961 qpair failed and we were unable to recover it. 00:25:19.961 [2024-07-15 23:28:35.166545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.961 [2024-07-15 23:28:35.166576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.961 qpair failed and we were unable to recover it. 00:25:19.961 [2024-07-15 23:28:35.166729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.961 [2024-07-15 23:28:35.166766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.961 qpair failed and we were unable to recover it. 00:25:19.961 [2024-07-15 23:28:35.166963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.961 [2024-07-15 23:28:35.167002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.961 qpair failed and we were unable to recover it. 00:25:19.961 [2024-07-15 23:28:35.167183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.961 [2024-07-15 23:28:35.167210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.961 qpair failed and we were unable to recover it. 00:25:19.961 [2024-07-15 23:28:35.167374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.961 [2024-07-15 23:28:35.167397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.961 qpair failed and we were unable to recover it. 
00:25:19.961 [2024-07-15 23:28:35.167560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.961 [2024-07-15 23:28:35.167588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.961 qpair failed and we were unable to recover it. 00:25:19.961 [2024-07-15 23:28:35.167732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.961 [2024-07-15 23:28:35.167798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.961 qpair failed and we were unable to recover it. 00:25:19.961 [2024-07-15 23:28:35.167978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.961 [2024-07-15 23:28:35.168004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.961 qpair failed and we were unable to recover it. 00:25:19.961 [2024-07-15 23:28:35.168189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.961 [2024-07-15 23:28:35.168213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.961 qpair failed and we were unable to recover it. 00:25:19.961 [2024-07-15 23:28:35.168469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.961 [2024-07-15 23:28:35.168497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.961 qpair failed and we were unable to recover it. 00:25:19.961 [2024-07-15 23:28:35.168630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.961 [2024-07-15 23:28:35.168658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.961 qpair failed and we were unable to recover it. 00:25:19.961 [2024-07-15 23:28:35.168821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.961 [2024-07-15 23:28:35.168850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.961 qpair failed and we were unable to recover it. 00:25:19.961 [2024-07-15 23:28:35.169080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.961 [2024-07-15 23:28:35.169102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.961 qpair failed and we were unable to recover it. 00:25:19.961 [2024-07-15 23:28:35.169305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.961 [2024-07-15 23:28:35.169333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.961 qpair failed and we were unable to recover it. 00:25:19.961 [2024-07-15 23:28:35.169564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.961 [2024-07-15 23:28:35.169613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.961 qpair failed and we were unable to recover it. 
00:25:19.961 [2024-07-15 23:28:35.169757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.961 [2024-07-15 23:28:35.169785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.961 qpair failed and we were unable to recover it. 00:25:19.961 [2024-07-15 23:28:35.170012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.961 [2024-07-15 23:28:35.170050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.961 qpair failed and we were unable to recover it. 00:25:19.961 [2024-07-15 23:28:35.170207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.961 [2024-07-15 23:28:35.170235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.961 qpair failed and we were unable to recover it. 00:25:19.961 [2024-07-15 23:28:35.170491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.961 [2024-07-15 23:28:35.170534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.961 qpair failed and we were unable to recover it. 00:25:19.961 [2024-07-15 23:28:35.170731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.961 [2024-07-15 23:28:35.170766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.961 qpair failed and we were unable to recover it. 00:25:19.961 [2024-07-15 23:28:35.170879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.962 [2024-07-15 23:28:35.170904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.962 qpair failed and we were unable to recover it. 00:25:19.962 [2024-07-15 23:28:35.171049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.962 [2024-07-15 23:28:35.171073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.962 qpair failed and we were unable to recover it. 00:25:19.962 [2024-07-15 23:28:35.171337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.962 [2024-07-15 23:28:35.171387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.962 qpair failed and we were unable to recover it. 00:25:19.962 [2024-07-15 23:28:35.171573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.962 [2024-07-15 23:28:35.171601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.962 qpair failed and we were unable to recover it. 00:25:19.962 [2024-07-15 23:28:35.171778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.962 [2024-07-15 23:28:35.171812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.962 qpair failed and we were unable to recover it. 
00:25:19.962 [2024-07-15 23:28:35.171953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.962 [2024-07-15 23:28:35.171981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.962 qpair failed and we were unable to recover it. 00:25:19.962 [2024-07-15 23:28:35.172181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.962 [2024-07-15 23:28:35.172239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.962 qpair failed and we were unable to recover it. 00:25:19.962 [2024-07-15 23:28:35.172420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.962 [2024-07-15 23:28:35.172452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.962 qpair failed and we were unable to recover it. 00:25:19.962 [2024-07-15 23:28:35.172628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.962 [2024-07-15 23:28:35.172651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.962 qpair failed and we were unable to recover it. 00:25:19.962 [2024-07-15 23:28:35.172822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.962 [2024-07-15 23:28:35.172852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.962 qpair failed and we were unable to recover it. 00:25:19.962 [2024-07-15 23:28:35.173106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.962 [2024-07-15 23:28:35.173151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.962 qpair failed and we were unable to recover it. 00:25:19.962 [2024-07-15 23:28:35.173298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.962 [2024-07-15 23:28:35.173336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.962 qpair failed and we were unable to recover it. 00:25:19.962 [2024-07-15 23:28:35.173598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.962 [2024-07-15 23:28:35.173621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.962 qpair failed and we were unable to recover it. 00:25:19.962 [2024-07-15 23:28:35.173837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.962 [2024-07-15 23:28:35.173872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.962 qpair failed and we were unable to recover it. 00:25:19.962 [2024-07-15 23:28:35.174097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.962 [2024-07-15 23:28:35.174147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.962 qpair failed and we were unable to recover it. 
00:25:19.962 [2024-07-15 23:28:35.174349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.962 [2024-07-15 23:28:35.174377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.962 qpair failed and we were unable to recover it. 00:25:19.962 [2024-07-15 23:28:35.174573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.962 [2024-07-15 23:28:35.174595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.962 qpair failed and we were unable to recover it. 00:25:19.962 [2024-07-15 23:28:35.174887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.962 [2024-07-15 23:28:35.174916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.962 qpair failed and we were unable to recover it. 00:25:19.962 [2024-07-15 23:28:35.175097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.962 [2024-07-15 23:28:35.175148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.962 qpair failed and we were unable to recover it. 00:25:19.962 [2024-07-15 23:28:35.175367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.962 [2024-07-15 23:28:35.175395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.962 qpair failed and we were unable to recover it. 00:25:19.962 [2024-07-15 23:28:35.175595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.962 [2024-07-15 23:28:35.175623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.962 qpair failed and we were unable to recover it. 00:25:19.962 [2024-07-15 23:28:35.175800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.962 [2024-07-15 23:28:35.175824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.962 qpair failed and we were unable to recover it. 00:25:19.962 [2024-07-15 23:28:35.176005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.962 [2024-07-15 23:28:35.176046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.962 qpair failed and we were unable to recover it. 00:25:19.962 [2024-07-15 23:28:35.176242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.962 [2024-07-15 23:28:35.176270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.962 qpair failed and we were unable to recover it. 00:25:19.962 [2024-07-15 23:28:35.176393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.962 [2024-07-15 23:28:35.176430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.962 qpair failed and we were unable to recover it. 
00:25:19.962 [2024-07-15 23:28:35.176585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.962 [2024-07-15 23:28:35.176626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.962 qpair failed and we were unable to recover it. 00:25:19.962 [2024-07-15 23:28:35.176800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.962 [2024-07-15 23:28:35.176860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.962 qpair failed and we were unable to recover it. 00:25:19.962 [2024-07-15 23:28:35.177011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.962 [2024-07-15 23:28:35.177047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.962 qpair failed and we were unable to recover it. 00:25:19.962 [2024-07-15 23:28:35.177227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.962 [2024-07-15 23:28:35.177249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.962 qpair failed and we were unable to recover it. 00:25:19.962 [2024-07-15 23:28:35.177463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.962 [2024-07-15 23:28:35.177490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.962 qpair failed and we were unable to recover it. 00:25:19.963 [2024-07-15 23:28:35.177715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.963 [2024-07-15 23:28:35.177750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.963 qpair failed and we were unable to recover it. 00:25:19.963 [2024-07-15 23:28:35.177905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.963 [2024-07-15 23:28:35.177932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.963 qpair failed and we were unable to recover it. 00:25:19.963 [2024-07-15 23:28:35.178096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.963 [2024-07-15 23:28:35.178119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.963 qpair failed and we were unable to recover it. 00:25:19.963 [2024-07-15 23:28:35.178289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.963 [2024-07-15 23:28:35.178317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.963 qpair failed and we were unable to recover it. 00:25:19.963 [2024-07-15 23:28:35.178453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.963 [2024-07-15 23:28:35.178481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.963 qpair failed and we were unable to recover it. 
00:25:19.963 [2024-07-15 23:28:35.178653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.963 [2024-07-15 23:28:35.178681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.963 qpair failed and we were unable to recover it. 00:25:19.963 [2024-07-15 23:28:35.178814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.963 [2024-07-15 23:28:35.178853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.963 qpair failed and we were unable to recover it. 00:25:19.963 [2024-07-15 23:28:35.178991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.963 [2024-07-15 23:28:35.179030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.963 qpair failed and we were unable to recover it. 00:25:19.963 [2024-07-15 23:28:35.179204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.963 [2024-07-15 23:28:35.179232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.963 qpair failed and we were unable to recover it. 00:25:19.963 [2024-07-15 23:28:35.179404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.963 [2024-07-15 23:28:35.179431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.963 qpair failed and we were unable to recover it. 00:25:19.963 [2024-07-15 23:28:35.179639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.963 [2024-07-15 23:28:35.179662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.963 qpair failed and we were unable to recover it. 00:25:19.963 [2024-07-15 23:28:35.179869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.963 [2024-07-15 23:28:35.179898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.963 qpair failed and we were unable to recover it. 00:25:19.963 [2024-07-15 23:28:35.180087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.963 [2024-07-15 23:28:35.180137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.963 qpair failed and we were unable to recover it. 00:25:19.963 [2024-07-15 23:28:35.180292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.963 [2024-07-15 23:28:35.180320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.963 qpair failed and we were unable to recover it. 00:25:19.963 [2024-07-15 23:28:35.180545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.963 [2024-07-15 23:28:35.180568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.963 qpair failed and we were unable to recover it. 
00:25:19.963 [2024-07-15 23:28:35.180759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.963 [2024-07-15 23:28:35.180801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.963 qpair failed and we were unable to recover it. 00:25:19.963 [2024-07-15 23:28:35.181002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.963 [2024-07-15 23:28:35.181042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.963 qpair failed and we were unable to recover it. 00:25:19.963 [2024-07-15 23:28:35.181203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.963 [2024-07-15 23:28:35.181231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.963 qpair failed and we were unable to recover it. 00:25:19.963 [2024-07-15 23:28:35.181412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.963 [2024-07-15 23:28:35.181434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.963 qpair failed and we were unable to recover it. 00:25:19.963 [2024-07-15 23:28:35.181596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.963 [2024-07-15 23:28:35.181624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.963 qpair failed and we were unable to recover it. 00:25:19.963 [2024-07-15 23:28:35.181839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.963 [2024-07-15 23:28:35.181874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.963 qpair failed and we were unable to recover it. 00:25:19.963 [2024-07-15 23:28:35.182029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.963 [2024-07-15 23:28:35.182057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.963 qpair failed and we were unable to recover it. 00:25:19.963 [2024-07-15 23:28:35.182244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.963 [2024-07-15 23:28:35.182267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.964 qpair failed and we were unable to recover it. 00:25:19.964 [2024-07-15 23:28:35.182453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.964 [2024-07-15 23:28:35.182485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.964 qpair failed and we were unable to recover it. 00:25:19.964 [2024-07-15 23:28:35.182638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.964 [2024-07-15 23:28:35.182666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.964 qpair failed and we were unable to recover it. 
00:25:19.964 [2024-07-15 23:28:35.182849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.964 [2024-07-15 23:28:35.182875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.964 qpair failed and we were unable to recover it. 00:25:19.964 [2024-07-15 23:28:35.183035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.964 [2024-07-15 23:28:35.183059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.964 qpair failed and we were unable to recover it. 00:25:19.964 [2024-07-15 23:28:35.183265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.964 [2024-07-15 23:28:35.183293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.964 qpair failed and we were unable to recover it. 00:25:19.964 [2024-07-15 23:28:35.183477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.964 [2024-07-15 23:28:35.183530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.964 qpair failed and we were unable to recover it. 00:25:19.964 [2024-07-15 23:28:35.183776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.964 [2024-07-15 23:28:35.183804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.964 qpair failed and we were unable to recover it. 00:25:19.964 [2024-07-15 23:28:35.183976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.964 [2024-07-15 23:28:35.183999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.964 qpair failed and we were unable to recover it. 00:25:19.964 [2024-07-15 23:28:35.184239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.964 [2024-07-15 23:28:35.184276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.964 qpair failed and we were unable to recover it. 00:25:19.964 [2024-07-15 23:28:35.184420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.964 [2024-07-15 23:28:35.184470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.964 qpair failed and we were unable to recover it. 00:25:19.964 [2024-07-15 23:28:35.184647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.964 [2024-07-15 23:28:35.184674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.964 qpair failed and we were unable to recover it. 00:25:19.964 [2024-07-15 23:28:35.184833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.964 [2024-07-15 23:28:35.184858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.964 qpair failed and we were unable to recover it. 
00:25:19.964 [2024-07-15 23:28:35.184987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.964 [2024-07-15 23:28:35.185025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.964 qpair failed and we were unable to recover it. 00:25:19.964 [2024-07-15 23:28:35.185252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.964 [2024-07-15 23:28:35.185301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.964 qpair failed and we were unable to recover it. 00:25:19.964 [2024-07-15 23:28:35.185460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.964 [2024-07-15 23:28:35.185488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.964 qpair failed and we were unable to recover it. 00:25:19.964 [2024-07-15 23:28:35.185736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.964 [2024-07-15 23:28:35.185765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.964 qpair failed and we were unable to recover it. 00:25:19.964 [2024-07-15 23:28:35.185944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.964 [2024-07-15 23:28:35.185972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.964 qpair failed and we were unable to recover it. 00:25:19.964 [2024-07-15 23:28:35.186206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.964 [2024-07-15 23:28:35.186256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.964 qpair failed and we were unable to recover it. 00:25:19.964 [2024-07-15 23:28:35.186399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.964 [2024-07-15 23:28:35.186427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.964 qpair failed and we were unable to recover it. 00:25:19.964 [2024-07-15 23:28:35.186685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.964 [2024-07-15 23:28:35.186708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.964 qpair failed and we were unable to recover it. 00:25:19.964 [2024-07-15 23:28:35.186878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.964 [2024-07-15 23:28:35.186906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.964 qpair failed and we were unable to recover it. 00:25:19.964 [2024-07-15 23:28:35.187124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.964 [2024-07-15 23:28:35.187171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.964 qpair failed and we were unable to recover it. 
00:25:19.964 [2024-07-15 23:28:35.187417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.964 [2024-07-15 23:28:35.187445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.964 qpair failed and we were unable to recover it. 00:25:19.964 [2024-07-15 23:28:35.187597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.964 [2024-07-15 23:28:35.187619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.964 qpair failed and we were unable to recover it. 00:25:19.964 [2024-07-15 23:28:35.187781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.964 [2024-07-15 23:28:35.187809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.964 qpair failed and we were unable to recover it. 00:25:19.964 [2024-07-15 23:28:35.188076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.964 [2024-07-15 23:28:35.188125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.964 qpair failed and we were unable to recover it. 00:25:19.964 [2024-07-15 23:28:35.188234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.964 [2024-07-15 23:28:35.188261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.964 qpair failed and we were unable to recover it. 00:25:19.964 [2024-07-15 23:28:35.188453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.964 [2024-07-15 23:28:35.188491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.964 qpair failed and we were unable to recover it. 00:25:19.964 [2024-07-15 23:28:35.188666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.964 [2024-07-15 23:28:35.188694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.965 qpair failed and we were unable to recover it. 00:25:19.965 [2024-07-15 23:28:35.188870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.965 [2024-07-15 23:28:35.188898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.965 qpair failed and we were unable to recover it. 00:25:19.965 [2024-07-15 23:28:35.189028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.965 [2024-07-15 23:28:35.189057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.965 qpair failed and we were unable to recover it. 00:25:19.965 [2024-07-15 23:28:35.189223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.965 [2024-07-15 23:28:35.189259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.965 qpair failed and we were unable to recover it. 
00:25:19.965 [2024-07-15 23:28:35.189468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.965 [2024-07-15 23:28:35.189496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.965 qpair failed and we were unable to recover it. 00:25:19.965 [2024-07-15 23:28:35.189628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.965 [2024-07-15 23:28:35.189656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.965 qpair failed and we were unable to recover it. 00:25:19.965 [2024-07-15 23:28:35.189895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.965 [2024-07-15 23:28:35.189923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.965 qpair failed and we were unable to recover it. 00:25:19.965 [2024-07-15 23:28:35.190093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.965 [2024-07-15 23:28:35.190116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.965 qpair failed and we were unable to recover it. 00:25:19.965 [2024-07-15 23:28:35.190281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.965 [2024-07-15 23:28:35.190308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.965 qpair failed and we were unable to recover it. 00:25:19.965 [2024-07-15 23:28:35.190493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.965 [2024-07-15 23:28:35.190520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.965 qpair failed and we were unable to recover it. 00:25:19.965 [2024-07-15 23:28:35.190702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.965 [2024-07-15 23:28:35.190730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.965 qpair failed and we were unable to recover it. 00:25:19.965 [2024-07-15 23:28:35.190885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.965 [2024-07-15 23:28:35.190909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.965 qpair failed and we were unable to recover it. 00:25:19.965 [2024-07-15 23:28:35.191114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.965 [2024-07-15 23:28:35.191150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.965 qpair failed and we were unable to recover it. 00:25:19.965 [2024-07-15 23:28:35.191315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.965 [2024-07-15 23:28:35.191363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.965 qpair failed and we were unable to recover it. 
00:25:19.965 [2024-07-15 23:28:35.191527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.965 [2024-07-15 23:28:35.191554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.965 qpair failed and we were unable to recover it. 00:25:19.965 [2024-07-15 23:28:35.191724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.965 [2024-07-15 23:28:35.191782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.965 qpair failed and we were unable to recover it. 00:25:19.965 [2024-07-15 23:28:35.191970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.965 [2024-07-15 23:28:35.192004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.965 qpair failed and we were unable to recover it. 00:25:19.965 [2024-07-15 23:28:35.192193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.965 [2024-07-15 23:28:35.192240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.965 qpair failed and we were unable to recover it. 00:25:19.965 [2024-07-15 23:28:35.192413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.965 [2024-07-15 23:28:35.192440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.965 qpair failed and we were unable to recover it. 00:25:19.965 [2024-07-15 23:28:35.192603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.965 [2024-07-15 23:28:35.192634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.965 qpair failed and we were unable to recover it. 00:25:19.965 [2024-07-15 23:28:35.192944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.965 [2024-07-15 23:28:35.192982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.965 qpair failed and we were unable to recover it. 00:25:19.965 [2024-07-15 23:28:35.193160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.965 [2024-07-15 23:28:35.193208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.965 qpair failed and we were unable to recover it. 00:25:19.965 [2024-07-15 23:28:35.193470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.965 [2024-07-15 23:28:35.193498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.965 qpair failed and we were unable to recover it. 00:25:19.965 [2024-07-15 23:28:35.193707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.965 [2024-07-15 23:28:35.193735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.965 qpair failed and we were unable to recover it. 
00:25:19.965 [2024-07-15 23:28:35.193977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.965 [2024-07-15 23:28:35.194000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.965 qpair failed and we were unable to recover it. 00:25:19.965 [2024-07-15 23:28:35.194129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.965 [2024-07-15 23:28:35.194185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.965 qpair failed and we were unable to recover it. 00:25:19.965 [2024-07-15 23:28:35.194442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.965 [2024-07-15 23:28:35.194470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.965 qpair failed and we were unable to recover it. 00:25:19.965 [2024-07-15 23:28:35.194745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.965 [2024-07-15 23:28:35.194769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.965 qpair failed and we were unable to recover it. 00:25:19.965 [2024-07-15 23:28:35.194943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.965 [2024-07-15 23:28:35.194970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.965 qpair failed and we were unable to recover it. 00:25:19.965 [2024-07-15 23:28:35.195189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.965 [2024-07-15 23:28:35.195248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.966 qpair failed and we were unable to recover it. 00:25:19.966 [2024-07-15 23:28:35.195557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.966 [2024-07-15 23:28:35.195584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.966 qpair failed and we were unable to recover it. 00:25:19.966 [2024-07-15 23:28:35.195814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.966 [2024-07-15 23:28:35.195839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.966 qpair failed and we were unable to recover it. 00:25:19.966 [2024-07-15 23:28:35.195966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.966 [2024-07-15 23:28:35.195994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.966 qpair failed and we were unable to recover it. 00:25:19.966 [2024-07-15 23:28:35.196269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.966 [2024-07-15 23:28:35.196317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:19.966 qpair failed and we were unable to recover it. 
00:25:20.250 [2024-07-15 23:28:35.237441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.250 [2024-07-15 23:28:35.237469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.250 qpair failed and we were unable to recover it. 00:25:20.250 [2024-07-15 23:28:35.237637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.250 [2024-07-15 23:28:35.237665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.250 qpair failed and we were unable to recover it. 00:25:20.250 [2024-07-15 23:28:35.237883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.250 [2024-07-15 23:28:35.237917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.250 qpair failed and we were unable to recover it. 00:25:20.250 [2024-07-15 23:28:35.238076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.250 [2024-07-15 23:28:35.238134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.250 qpair failed and we were unable to recover it. 00:25:20.250 [2024-07-15 23:28:35.238309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.250 [2024-07-15 23:28:35.238338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.250 qpair failed and we were unable to recover it. 00:25:20.250 [2024-07-15 23:28:35.238479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.250 [2024-07-15 23:28:35.238508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.250 qpair failed and we were unable to recover it. 00:25:20.250 [2024-07-15 23:28:35.238661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.250 [2024-07-15 23:28:35.238704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.250 qpair failed and we were unable to recover it. 00:25:20.250 [2024-07-15 23:28:35.238885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.250 [2024-07-15 23:28:35.238911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.250 qpair failed and we were unable to recover it. 00:25:20.250 [2024-07-15 23:28:35.239036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.250 [2024-07-15 23:28:35.239064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.250 qpair failed and we were unable to recover it. 00:25:20.250 [2024-07-15 23:28:35.239289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.250 [2024-07-15 23:28:35.239316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.250 qpair failed and we were unable to recover it. 
00:25:20.250 [2024-07-15 23:28:35.239500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.250 [2024-07-15 23:28:35.239528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.250 qpair failed and we were unable to recover it. 00:25:20.250 [2024-07-15 23:28:35.239666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.250 [2024-07-15 23:28:35.239693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.250 qpair failed and we were unable to recover it. 00:25:20.250 [2024-07-15 23:28:35.239819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.250 [2024-07-15 23:28:35.239848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.250 qpair failed and we were unable to recover it. 00:25:20.250 [2024-07-15 23:28:35.240031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.250 [2024-07-15 23:28:35.240057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.250 qpair failed and we were unable to recover it. 00:25:20.250 [2024-07-15 23:28:35.240208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.250 [2024-07-15 23:28:35.240236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.250 qpair failed and we were unable to recover it. 00:25:20.250 [2024-07-15 23:28:35.240481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.250 [2024-07-15 23:28:35.240531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.250 qpair failed and we were unable to recover it. 00:25:20.250 [2024-07-15 23:28:35.240677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.250 [2024-07-15 23:28:35.240705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.250 qpair failed and we were unable to recover it. 00:25:20.250 [2024-07-15 23:28:35.240834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.250 [2024-07-15 23:28:35.240860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.250 qpair failed and we were unable to recover it. 00:25:20.250 [2024-07-15 23:28:35.240992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.250 [2024-07-15 23:28:35.241017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.250 qpair failed and we were unable to recover it. 00:25:20.250 [2024-07-15 23:28:35.241174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.250 [2024-07-15 23:28:35.241201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.251 qpair failed and we were unable to recover it. 
00:25:20.251 [2024-07-15 23:28:35.241348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.251 [2024-07-15 23:28:35.241376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.251 qpair failed and we were unable to recover it. 00:25:20.251 [2024-07-15 23:28:35.241518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.251 [2024-07-15 23:28:35.241543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.251 qpair failed and we were unable to recover it. 00:25:20.251 [2024-07-15 23:28:35.241722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.251 [2024-07-15 23:28:35.241763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.251 qpair failed and we were unable to recover it. 00:25:20.251 [2024-07-15 23:28:35.241895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.251 [2024-07-15 23:28:35.241932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.251 qpair failed and we were unable to recover it. 00:25:20.251 [2024-07-15 23:28:35.242128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.251 [2024-07-15 23:28:35.242156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.251 qpair failed and we were unable to recover it. 00:25:20.251 [2024-07-15 23:28:35.242336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.251 [2024-07-15 23:28:35.242361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.251 qpair failed and we were unable to recover it. 00:25:20.251 [2024-07-15 23:28:35.242474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.251 [2024-07-15 23:28:35.242499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.251 qpair failed and we were unable to recover it. 00:25:20.251 [2024-07-15 23:28:35.242702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.251 [2024-07-15 23:28:35.242730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.251 qpair failed and we were unable to recover it. 00:25:20.251 [2024-07-15 23:28:35.242878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.251 [2024-07-15 23:28:35.242906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.251 qpair failed and we were unable to recover it. 00:25:20.251 [2024-07-15 23:28:35.243074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.251 [2024-07-15 23:28:35.243100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.251 qpair failed and we were unable to recover it. 
00:25:20.251 [2024-07-15 23:28:35.243270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.251 [2024-07-15 23:28:35.243298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.251 qpair failed and we were unable to recover it. 00:25:20.251 [2024-07-15 23:28:35.243454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.251 [2024-07-15 23:28:35.243513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.251 qpair failed and we were unable to recover it. 00:25:20.251 [2024-07-15 23:28:35.243771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.251 [2024-07-15 23:28:35.243801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.251 qpair failed and we were unable to recover it. 00:25:20.251 [2024-07-15 23:28:35.243927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.251 [2024-07-15 23:28:35.243953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.251 qpair failed and we were unable to recover it. 00:25:20.251 [2024-07-15 23:28:35.244095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.251 [2024-07-15 23:28:35.244120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.251 qpair failed and we were unable to recover it. 00:25:20.251 [2024-07-15 23:28:35.244276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.251 [2024-07-15 23:28:35.244304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.251 qpair failed and we were unable to recover it. 00:25:20.251 [2024-07-15 23:28:35.244442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.251 [2024-07-15 23:28:35.244469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.251 qpair failed and we were unable to recover it. 00:25:20.251 [2024-07-15 23:28:35.244707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.251 [2024-07-15 23:28:35.244732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.251 qpair failed and we were unable to recover it. 00:25:20.251 [2024-07-15 23:28:35.244895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.251 [2024-07-15 23:28:35.244923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.251 qpair failed and we were unable to recover it. 00:25:20.251 [2024-07-15 23:28:35.245140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.251 [2024-07-15 23:28:35.245168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.251 qpair failed and we were unable to recover it. 
00:25:20.251 [2024-07-15 23:28:35.245345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.251 [2024-07-15 23:28:35.245374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.251 qpair failed and we were unable to recover it. 00:25:20.251 [2024-07-15 23:28:35.245553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.251 [2024-07-15 23:28:35.245581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.251 qpair failed and we were unable to recover it. 00:25:20.251 [2024-07-15 23:28:35.245753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.251 [2024-07-15 23:28:35.245795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.251 qpair failed and we were unable to recover it. 00:25:20.251 [2024-07-15 23:28:35.245896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.251 [2024-07-15 23:28:35.245922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.251 qpair failed and we were unable to recover it. 00:25:20.251 [2024-07-15 23:28:35.246075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.251 [2024-07-15 23:28:35.246103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.251 qpair failed and we were unable to recover it. 00:25:20.251 [2024-07-15 23:28:35.246339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.251 [2024-07-15 23:28:35.246369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.251 qpair failed and we were unable to recover it. 00:25:20.251 [2024-07-15 23:28:35.246498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.251 [2024-07-15 23:28:35.246526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.251 qpair failed and we were unable to recover it. 00:25:20.251 [2024-07-15 23:28:35.246747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.251 [2024-07-15 23:28:35.246776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.251 qpair failed and we were unable to recover it. 00:25:20.251 [2024-07-15 23:28:35.246923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.251 [2024-07-15 23:28:35.246951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.251 qpair failed and we were unable to recover it. 00:25:20.251 [2024-07-15 23:28:35.247122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.251 [2024-07-15 23:28:35.247147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.251 qpair failed and we were unable to recover it. 
00:25:20.251 [2024-07-15 23:28:35.247304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.251 [2024-07-15 23:28:35.247332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.251 qpair failed and we were unable to recover it. 00:25:20.251 [2024-07-15 23:28:35.247464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.251 [2024-07-15 23:28:35.247492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.251 qpair failed and we were unable to recover it. 00:25:20.251 [2024-07-15 23:28:35.247706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.251 [2024-07-15 23:28:35.247734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.251 qpair failed and we were unable to recover it. 00:25:20.251 [2024-07-15 23:28:35.247986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.251 [2024-07-15 23:28:35.248011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.251 qpair failed and we were unable to recover it. 00:25:20.251 [2024-07-15 23:28:35.248214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.251 [2024-07-15 23:28:35.248242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.251 qpair failed and we were unable to recover it. 00:25:20.251 [2024-07-15 23:28:35.248430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.251 [2024-07-15 23:28:35.248477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.251 qpair failed and we were unable to recover it. 00:25:20.251 [2024-07-15 23:28:35.248647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.251 [2024-07-15 23:28:35.248674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.251 qpair failed and we were unable to recover it. 00:25:20.251 [2024-07-15 23:28:35.248792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.251 [2024-07-15 23:28:35.248819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.251 qpair failed and we were unable to recover it. 00:25:20.251 [2024-07-15 23:28:35.248962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.251 [2024-07-15 23:28:35.248987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.251 qpair failed and we were unable to recover it. 00:25:20.251 [2024-07-15 23:28:35.249186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.251 [2024-07-15 23:28:35.249230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.251 qpair failed and we were unable to recover it. 
00:25:20.252 [2024-07-15 23:28:35.249411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.252 [2024-07-15 23:28:35.249439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.252 qpair failed and we were unable to recover it. 00:25:20.252 [2024-07-15 23:28:35.249637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.252 [2024-07-15 23:28:35.249662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.252 qpair failed and we were unable to recover it. 00:25:20.252 [2024-07-15 23:28:35.249809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.252 [2024-07-15 23:28:35.249837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.252 qpair failed and we were unable to recover it. 00:25:20.252 [2024-07-15 23:28:35.249997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.252 [2024-07-15 23:28:35.250024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.252 qpair failed and we were unable to recover it. 00:25:20.252 [2024-07-15 23:28:35.250190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.252 [2024-07-15 23:28:35.250218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.252 qpair failed and we were unable to recover it. 00:25:20.252 [2024-07-15 23:28:35.250383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.252 [2024-07-15 23:28:35.250409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.252 qpair failed and we were unable to recover it. 00:25:20.252 [2024-07-15 23:28:35.250617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.252 [2024-07-15 23:28:35.250644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.252 qpair failed and we were unable to recover it. 00:25:20.252 [2024-07-15 23:28:35.250913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.252 [2024-07-15 23:28:35.250942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.252 qpair failed and we were unable to recover it. 00:25:20.252 [2024-07-15 23:28:35.251095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.252 [2024-07-15 23:28:35.251123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.252 qpair failed and we were unable to recover it. 00:25:20.252 [2024-07-15 23:28:35.251350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.252 [2024-07-15 23:28:35.251374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.252 qpair failed and we were unable to recover it. 
00:25:20.252 [2024-07-15 23:28:35.251523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.252 [2024-07-15 23:28:35.251550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.252 qpair failed and we were unable to recover it. 00:25:20.252 [2024-07-15 23:28:35.251694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.252 [2024-07-15 23:28:35.251722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.252 qpair failed and we were unable to recover it. 00:25:20.252 [2024-07-15 23:28:35.251881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.252 [2024-07-15 23:28:35.251910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.252 qpair failed and we were unable to recover it. 00:25:20.252 [2024-07-15 23:28:35.252160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.252 [2024-07-15 23:28:35.252185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.252 qpair failed and we were unable to recover it. 00:25:20.252 [2024-07-15 23:28:35.252329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.252 [2024-07-15 23:28:35.252357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.252 qpair failed and we were unable to recover it. 00:25:20.252 [2024-07-15 23:28:35.252556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.252 [2024-07-15 23:28:35.252584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.252 qpair failed and we were unable to recover it. 00:25:20.252 [2024-07-15 23:28:35.252698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.252 [2024-07-15 23:28:35.252726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.252 qpair failed and we were unable to recover it. 00:25:20.252 [2024-07-15 23:28:35.252965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.252 [2024-07-15 23:28:35.252990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.252 qpair failed and we were unable to recover it. 00:25:20.252 [2024-07-15 23:28:35.253185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.252 [2024-07-15 23:28:35.253213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.252 qpair failed and we were unable to recover it. 00:25:20.252 [2024-07-15 23:28:35.253373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.252 [2024-07-15 23:28:35.253424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.252 qpair failed and we were unable to recover it. 
00:25:20.252 [2024-07-15 23:28:35.253581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.252 [2024-07-15 23:28:35.253609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.252 qpair failed and we were unable to recover it. 00:25:20.252 [2024-07-15 23:28:35.253752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.252 [2024-07-15 23:28:35.253778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.252 qpair failed and we were unable to recover it. 00:25:20.252 [2024-07-15 23:28:35.253918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.252 [2024-07-15 23:28:35.253943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.252 qpair failed and we were unable to recover it. 00:25:20.252 [2024-07-15 23:28:35.254105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.252 [2024-07-15 23:28:35.254165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.252 qpair failed and we were unable to recover it. 00:25:20.252 [2024-07-15 23:28:35.254317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.252 [2024-07-15 23:28:35.254344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.252 qpair failed and we were unable to recover it. 00:25:20.252 [2024-07-15 23:28:35.254484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.252 [2024-07-15 23:28:35.254513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.252 qpair failed and we were unable to recover it. 00:25:20.252 [2024-07-15 23:28:35.254649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.252 [2024-07-15 23:28:35.254690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.252 qpair failed and we were unable to recover it. 00:25:20.252 [2024-07-15 23:28:35.254850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.252 [2024-07-15 23:28:35.254876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.252 qpair failed and we were unable to recover it. 00:25:20.252 [2024-07-15 23:28:35.255087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.252 [2024-07-15 23:28:35.255115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.252 qpair failed and we were unable to recover it. 00:25:20.252 [2024-07-15 23:28:35.255272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.252 [2024-07-15 23:28:35.255297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.252 qpair failed and we were unable to recover it. 
00:25:20.252 [2024-07-15 23:28:35.255438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.252 [2024-07-15 23:28:35.255481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.252 qpair failed and we were unable to recover it. 00:25:20.252 [2024-07-15 23:28:35.255691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.252 [2024-07-15 23:28:35.255719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.252 qpair failed and we were unable to recover it. 00:25:20.252 [2024-07-15 23:28:35.255892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.252 [2024-07-15 23:28:35.255921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.252 qpair failed and we were unable to recover it. 00:25:20.252 [2024-07-15 23:28:35.256078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.252 [2024-07-15 23:28:35.256103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.252 qpair failed and we were unable to recover it. 00:25:20.252 [2024-07-15 23:28:35.256304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.252 [2024-07-15 23:28:35.256332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.252 qpair failed and we were unable to recover it. 00:25:20.252 [2024-07-15 23:28:35.256461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.252 [2024-07-15 23:28:35.256512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.252 qpair failed and we were unable to recover it. 00:25:20.252 [2024-07-15 23:28:35.256710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.252 [2024-07-15 23:28:35.256744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.252 qpair failed and we were unable to recover it. 00:25:20.252 [2024-07-15 23:28:35.256891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.252 [2024-07-15 23:28:35.256916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.252 qpair failed and we were unable to recover it. 00:25:20.252 [2024-07-15 23:28:35.257055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.252 [2024-07-15 23:28:35.257081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.252 qpair failed and we were unable to recover it. 00:25:20.252 [2024-07-15 23:28:35.257291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.252 [2024-07-15 23:28:35.257341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.252 qpair failed and we were unable to recover it. 
00:25:20.252 [2024-07-15 23:28:35.257461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.252 [2024-07-15 23:28:35.257489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.253 qpair failed and we were unable to recover it. 00:25:20.253 [2024-07-15 23:28:35.257637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.253 [2024-07-15 23:28:35.257662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.253 qpair failed and we were unable to recover it. 00:25:20.253 [2024-07-15 23:28:35.257799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.253 [2024-07-15 23:28:35.257842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.253 qpair failed and we were unable to recover it. 00:25:20.253 [2024-07-15 23:28:35.258037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.253 [2024-07-15 23:28:35.258065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.253 qpair failed and we were unable to recover it. 00:25:20.253 [2024-07-15 23:28:35.258242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.253 [2024-07-15 23:28:35.258270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.253 qpair failed and we were unable to recover it. 00:25:20.253 [2024-07-15 23:28:35.258411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.253 [2024-07-15 23:28:35.258436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.253 qpair failed and we were unable to recover it. 00:25:20.253 [2024-07-15 23:28:35.258624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.253 [2024-07-15 23:28:35.258652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.253 qpair failed and we were unable to recover it. 00:25:20.253 [2024-07-15 23:28:35.258822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.253 [2024-07-15 23:28:35.258851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.253 qpair failed and we were unable to recover it. 00:25:20.253 [2024-07-15 23:28:35.258992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.253 [2024-07-15 23:28:35.259020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.253 qpair failed and we were unable to recover it. 00:25:20.253 [2024-07-15 23:28:35.259152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.253 [2024-07-15 23:28:35.259178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.253 qpair failed and we were unable to recover it. 
00:25:20.253 [2024-07-15 23:28:35.259306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.253 [2024-07-15 23:28:35.259331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.253 qpair failed and we were unable to recover it. 00:25:20.253 [2024-07-15 23:28:35.259603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.253 [2024-07-15 23:28:35.259631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.253 qpair failed and we were unable to recover it. 00:25:20.253 [2024-07-15 23:28:35.259774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.253 [2024-07-15 23:28:35.259803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.253 qpair failed and we were unable to recover it. 00:25:20.253 [2024-07-15 23:28:35.259906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.253 [2024-07-15 23:28:35.259932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.253 qpair failed and we were unable to recover it. 00:25:20.253 [2024-07-15 23:28:35.260077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.253 [2024-07-15 23:28:35.260102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.253 qpair failed and we were unable to recover it. 00:25:20.253 [2024-07-15 23:28:35.260286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.253 [2024-07-15 23:28:35.260314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.253 qpair failed and we were unable to recover it. 00:25:20.253 [2024-07-15 23:28:35.260497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.253 [2024-07-15 23:28:35.260525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.253 qpair failed and we were unable to recover it. 00:25:20.253 [2024-07-15 23:28:35.260704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.253 [2024-07-15 23:28:35.260729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.253 qpair failed and we were unable to recover it. 00:25:20.253 [2024-07-15 23:28:35.260912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.253 [2024-07-15 23:28:35.260941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.253 qpair failed and we were unable to recover it. 00:25:20.253 [2024-07-15 23:28:35.261104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.253 [2024-07-15 23:28:35.261154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.253 qpair failed and we were unable to recover it. 
00:25:20.253 [2024-07-15 23:28:35.261337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.253 [2024-07-15 23:28:35.261365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.253 qpair failed and we were unable to recover it. 00:25:20.253 [2024-07-15 23:28:35.261525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.253 [2024-07-15 23:28:35.261550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.253 qpair failed and we were unable to recover it. 00:25:20.253 [2024-07-15 23:28:35.261688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.253 [2024-07-15 23:28:35.261730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.253 qpair failed and we were unable to recover it. 00:25:20.253 [2024-07-15 23:28:35.261947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.253 [2024-07-15 23:28:35.261982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.253 qpair failed and we were unable to recover it. 00:25:20.253 [2024-07-15 23:28:35.262159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.253 [2024-07-15 23:28:35.262187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.253 qpair failed and we were unable to recover it. 00:25:20.253 [2024-07-15 23:28:35.262330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.253 [2024-07-15 23:28:35.262360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.253 qpair failed and we were unable to recover it. 00:25:20.253 [2024-07-15 23:28:35.262515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.253 [2024-07-15 23:28:35.262556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.253 qpair failed and we were unable to recover it. 00:25:20.253 [2024-07-15 23:28:35.262666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.253 [2024-07-15 23:28:35.262695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.253 qpair failed and we were unable to recover it. 00:25:20.253 [2024-07-15 23:28:35.262855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.253 [2024-07-15 23:28:35.262881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.253 qpair failed and we were unable to recover it. 00:25:20.253 [2024-07-15 23:28:35.263040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.253 [2024-07-15 23:28:35.263065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.253 qpair failed and we were unable to recover it. 
00:25:20.253 [2024-07-15 23:28:35.263219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.253 [2024-07-15 23:28:35.263247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.253 qpair failed and we were unable to recover it.
[... the same two errors repeat continuously from 23:28:35.263 through 23:28:35.303: posix_sock_create connect() failed with errno = 111 (connection refused), followed by nvme_tcp_qpair_connect_sock reporting a sock connection error for tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:25:20.259 [2024-07-15 23:28:35.303906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.259 [2024-07-15 23:28:35.303934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.259 qpair failed and we were unable to recover it. 00:25:20.259 [2024-07-15 23:28:35.304075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.259 [2024-07-15 23:28:35.304116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.259 qpair failed and we were unable to recover it. 00:25:20.259 [2024-07-15 23:28:35.304249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.259 [2024-07-15 23:28:35.304290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.259 qpair failed and we were unable to recover it. 00:25:20.259 [2024-07-15 23:28:35.304401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.259 [2024-07-15 23:28:35.304428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.259 qpair failed and we were unable to recover it. 00:25:20.259 [2024-07-15 23:28:35.304578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.259 [2024-07-15 23:28:35.304606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.259 qpair failed and we were unable to recover it. 00:25:20.259 [2024-07-15 23:28:35.304749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.259 [2024-07-15 23:28:35.304775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.259 qpair failed and we were unable to recover it. 00:25:20.259 [2024-07-15 23:28:35.304913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.259 [2024-07-15 23:28:35.304937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.259 qpair failed and we were unable to recover it. 00:25:20.259 [2024-07-15 23:28:35.305102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.259 [2024-07-15 23:28:35.305130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.259 qpair failed and we were unable to recover it. 00:25:20.259 [2024-07-15 23:28:35.305243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.259 [2024-07-15 23:28:35.305270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.259 qpair failed and we were unable to recover it. 00:25:20.259 [2024-07-15 23:28:35.305415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.259 [2024-07-15 23:28:35.305455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.259 qpair failed and we were unable to recover it. 
00:25:20.259 [2024-07-15 23:28:35.305594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.259 [2024-07-15 23:28:35.305617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.259 qpair failed and we were unable to recover it. 00:25:20.259 [2024-07-15 23:28:35.305789] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93c80 is same with the state(5) to be set 00:25:20.259 Read completed with error (sct=0, sc=8) 00:25:20.259 starting I/O failed 00:25:20.259 Read completed with error (sct=0, sc=8) 00:25:20.259 starting I/O failed 00:25:20.259 Read completed with error (sct=0, sc=8) 00:25:20.259 starting I/O failed 00:25:20.259 Read completed with error (sct=0, sc=8) 00:25:20.259 starting I/O failed 00:25:20.259 Read completed with error (sct=0, sc=8) 00:25:20.259 starting I/O failed 00:25:20.259 Read completed with error (sct=0, sc=8) 00:25:20.259 starting I/O failed 00:25:20.259 Read completed with error (sct=0, sc=8) 00:25:20.259 starting I/O failed 00:25:20.259 Read completed with error (sct=0, sc=8) 00:25:20.259 starting I/O failed 00:25:20.259 Read completed with error (sct=0, sc=8) 00:25:20.259 starting I/O failed 00:25:20.259 Read completed with error (sct=0, sc=8) 00:25:20.259 starting I/O failed 00:25:20.259 Read completed with error (sct=0, sc=8) 00:25:20.259 starting I/O failed 00:25:20.259 Read completed with error (sct=0, sc=8) 00:25:20.259 starting I/O failed 00:25:20.259 Read completed with error (sct=0, sc=8) 00:25:20.259 starting I/O failed 00:25:20.259 Read completed with error (sct=0, sc=8) 00:25:20.259 starting I/O failed 00:25:20.259 Read completed with error (sct=0, sc=8) 00:25:20.259 starting I/O failed 00:25:20.259 Read completed with error (sct=0, sc=8) 00:25:20.259 starting I/O failed 00:25:20.259 Read completed with error (sct=0, sc=8) 00:25:20.259 starting I/O failed 00:25:20.259 Write completed with error (sct=0, sc=8) 00:25:20.259 starting I/O failed 00:25:20.259 Read completed with error (sct=0, sc=8) 00:25:20.259 starting I/O failed 00:25:20.259 Write completed with error (sct=0, sc=8) 00:25:20.259 starting I/O failed 00:25:20.259 Write completed with error (sct=0, sc=8) 00:25:20.259 starting I/O failed 00:25:20.259 Read completed with error (sct=0, sc=8) 00:25:20.259 starting I/O failed 00:25:20.259 Read completed with error (sct=0, sc=8) 00:25:20.259 starting I/O failed 00:25:20.259 Write completed with error (sct=0, sc=8) 00:25:20.259 starting I/O failed 00:25:20.259 Write completed with error (sct=0, sc=8) 00:25:20.259 starting I/O failed 00:25:20.259 Write completed with error (sct=0, sc=8) 00:25:20.259 starting I/O failed 00:25:20.259 Write completed with error (sct=0, sc=8) 00:25:20.259 starting I/O failed 00:25:20.259 Read completed with error (sct=0, sc=8) 00:25:20.259 starting I/O failed 00:25:20.259 Read completed with error (sct=0, sc=8) 00:25:20.259 starting I/O failed 00:25:20.259 Read completed with error (sct=0, sc=8) 00:25:20.259 starting I/O failed 00:25:20.259 Write completed with error (sct=0, sc=8) 00:25:20.259 starting I/O failed 00:25:20.259 Write completed with error (sct=0, sc=8) 00:25:20.259 starting I/O failed 00:25:20.259 [2024-07-15 23:28:35.306183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:20.259 [2024-07-15 23:28:35.306391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 
111 00:25:20.259 [2024-07-15 23:28:35.306429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.259 qpair failed and we were unable to recover it. 00:25:20.259 [2024-07-15 23:28:35.306586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.259 [2024-07-15 23:28:35.306629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.259 qpair failed and we were unable to recover it. 00:25:20.259 [2024-07-15 23:28:35.306762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.259 [2024-07-15 23:28:35.306788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.259 qpair failed and we were unable to recover it. 00:25:20.259 [2024-07-15 23:28:35.306910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.259 [2024-07-15 23:28:35.306949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.259 qpair failed and we were unable to recover it. 00:25:20.259 [2024-07-15 23:28:35.307170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.259 [2024-07-15 23:28:35.307194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.259 qpair failed and we were unable to recover it. 00:25:20.259 [2024-07-15 23:28:35.307383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.259 [2024-07-15 23:28:35.307413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.259 qpair failed and we were unable to recover it. 00:25:20.259 [2024-07-15 23:28:35.307556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.259 [2024-07-15 23:28:35.307584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.259 qpair failed and we were unable to recover it. 00:25:20.259 [2024-07-15 23:28:35.307747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.259 [2024-07-15 23:28:35.307794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.260 qpair failed and we were unable to recover it. 00:25:20.260 [2024-07-15 23:28:35.307927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.260 [2024-07-15 23:28:35.307952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.260 qpair failed and we were unable to recover it. 00:25:20.260 [2024-07-15 23:28:35.308086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.260 [2024-07-15 23:28:35.308111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.260 qpair failed and we were unable to recover it. 
00:25:20.260 [2024-07-15 23:28:35.308273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.260 [2024-07-15 23:28:35.308312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.260 qpair failed and we were unable to recover it. 00:25:20.260 [2024-07-15 23:28:35.308443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.260 [2024-07-15 23:28:35.308502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.260 qpair failed and we were unable to recover it. 00:25:20.260 [2024-07-15 23:28:35.308663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.260 [2024-07-15 23:28:35.308689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.260 qpair failed and we were unable to recover it. 00:25:20.260 [2024-07-15 23:28:35.308841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.260 [2024-07-15 23:28:35.308867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.260 qpair failed and we were unable to recover it. 00:25:20.260 [2024-07-15 23:28:35.308988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.260 [2024-07-15 23:28:35.309015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.260 qpair failed and we were unable to recover it. 00:25:20.260 [2024-07-15 23:28:35.309204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.260 [2024-07-15 23:28:35.309227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.260 qpair failed and we were unable to recover it. 00:25:20.260 [2024-07-15 23:28:35.309400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.260 [2024-07-15 23:28:35.309454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.260 qpair failed and we were unable to recover it. 00:25:20.260 [2024-07-15 23:28:35.309608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.260 [2024-07-15 23:28:35.309650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.260 qpair failed and we were unable to recover it. 00:25:20.260 [2024-07-15 23:28:35.309808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.260 [2024-07-15 23:28:35.309834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.260 qpair failed and we were unable to recover it. 00:25:20.260 [2024-07-15 23:28:35.309965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.260 [2024-07-15 23:28:35.309991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.260 qpair failed and we were unable to recover it. 
00:25:20.260 [2024-07-15 23:28:35.310227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.260 [2024-07-15 23:28:35.310286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.260 qpair failed and we were unable to recover it. 00:25:20.260 [2024-07-15 23:28:35.310482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.260 [2024-07-15 23:28:35.310530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.260 qpair failed and we were unable to recover it. 00:25:20.260 [2024-07-15 23:28:35.310657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.260 [2024-07-15 23:28:35.310685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.260 qpair failed and we were unable to recover it. 00:25:20.260 [2024-07-15 23:28:35.310855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.260 [2024-07-15 23:28:35.310881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.260 qpair failed and we were unable to recover it. 00:25:20.260 [2024-07-15 23:28:35.311080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.260 [2024-07-15 23:28:35.311108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.260 qpair failed and we were unable to recover it. 00:25:20.260 [2024-07-15 23:28:35.311249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.260 [2024-07-15 23:28:35.311277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.260 qpair failed and we were unable to recover it. 00:25:20.260 [2024-07-15 23:28:35.311387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.260 [2024-07-15 23:28:35.311412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.260 qpair failed and we were unable to recover it. 00:25:20.260 [2024-07-15 23:28:35.311608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.260 [2024-07-15 23:28:35.311636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.260 qpair failed and we were unable to recover it. 00:25:20.260 [2024-07-15 23:28:35.311758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.260 [2024-07-15 23:28:35.311799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.260 qpair failed and we were unable to recover it. 00:25:20.260 [2024-07-15 23:28:35.311965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.260 [2024-07-15 23:28:35.311990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.260 qpair failed and we were unable to recover it. 
00:25:20.260 [2024-07-15 23:28:35.312158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.260 [2024-07-15 23:28:35.312182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.260 qpair failed and we were unable to recover it. 00:25:20.260 [2024-07-15 23:28:35.312326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.260 [2024-07-15 23:28:35.312353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.260 qpair failed and we were unable to recover it. 00:25:20.260 [2024-07-15 23:28:35.312514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.260 [2024-07-15 23:28:35.312542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.260 qpair failed and we were unable to recover it. 00:25:20.260 [2024-07-15 23:28:35.312790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.260 [2024-07-15 23:28:35.312816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.260 qpair failed and we were unable to recover it. 00:25:20.260 [2024-07-15 23:28:35.312934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.260 [2024-07-15 23:28:35.312959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.260 qpair failed and we were unable to recover it. 00:25:20.260 [2024-07-15 23:28:35.313138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.260 [2024-07-15 23:28:35.313193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.260 qpair failed and we were unable to recover it. 00:25:20.260 [2024-07-15 23:28:35.313344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.260 [2024-07-15 23:28:35.313372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.260 qpair failed and we were unable to recover it. 00:25:20.260 [2024-07-15 23:28:35.313515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.260 [2024-07-15 23:28:35.313543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.260 qpair failed and we were unable to recover it. 00:25:20.260 [2024-07-15 23:28:35.313764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.260 [2024-07-15 23:28:35.313804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.260 qpair failed and we were unable to recover it. 00:25:20.260 [2024-07-15 23:28:35.313956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.260 [2024-07-15 23:28:35.313983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.260 qpair failed and we were unable to recover it. 
00:25:20.260 [2024-07-15 23:28:35.314165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.260 [2024-07-15 23:28:35.314190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.260 qpair failed and we were unable to recover it. 00:25:20.260 [2024-07-15 23:28:35.314363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.260 [2024-07-15 23:28:35.314404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.260 qpair failed and we were unable to recover it. 00:25:20.260 [2024-07-15 23:28:35.314581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.260 [2024-07-15 23:28:35.314630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.260 qpair failed and we were unable to recover it. 00:25:20.260 [2024-07-15 23:28:35.314817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.260 [2024-07-15 23:28:35.314844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.260 qpair failed and we were unable to recover it. 00:25:20.260 [2024-07-15 23:28:35.314988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.260 [2024-07-15 23:28:35.315015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.260 qpair failed and we were unable to recover it. 00:25:20.260 [2024-07-15 23:28:35.315177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.260 [2024-07-15 23:28:35.315201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.260 qpair failed and we were unable to recover it. 00:25:20.260 [2024-07-15 23:28:35.315386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.260 [2024-07-15 23:28:35.315410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.260 qpair failed and we were unable to recover it. 00:25:20.260 [2024-07-15 23:28:35.315621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.260 [2024-07-15 23:28:35.315646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.260 qpair failed and we were unable to recover it. 00:25:20.261 [2024-07-15 23:28:35.315764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.261 [2024-07-15 23:28:35.315789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.261 qpair failed and we were unable to recover it. 00:25:20.261 [2024-07-15 23:28:35.316003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.261 [2024-07-15 23:28:35.316045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.261 qpair failed and we were unable to recover it. 
00:25:20.261 [2024-07-15 23:28:35.316185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.261 [2024-07-15 23:28:35.316232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.261 qpair failed and we were unable to recover it. 00:25:20.261 [2024-07-15 23:28:35.316367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.261 [2024-07-15 23:28:35.316420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.261 qpair failed and we were unable to recover it. 00:25:20.261 [2024-07-15 23:28:35.316592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.261 [2024-07-15 23:28:35.316617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.261 qpair failed and we were unable to recover it. 00:25:20.261 [2024-07-15 23:28:35.316806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.261 [2024-07-15 23:28:35.316850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.261 qpair failed and we were unable to recover it. 00:25:20.261 [2024-07-15 23:28:35.316974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.261 [2024-07-15 23:28:35.317003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.261 qpair failed and we were unable to recover it. 00:25:20.261 [2024-07-15 23:28:35.317153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.261 [2024-07-15 23:28:35.317182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.261 qpair failed and we were unable to recover it. 00:25:20.261 [2024-07-15 23:28:35.317328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.261 [2024-07-15 23:28:35.317357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.261 qpair failed and we were unable to recover it. 00:25:20.261 [2024-07-15 23:28:35.317601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.261 [2024-07-15 23:28:35.317651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.261 qpair failed and we were unable to recover it. 00:25:20.261 [2024-07-15 23:28:35.317867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.261 [2024-07-15 23:28:35.317893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.261 qpair failed and we were unable to recover it. 00:25:20.261 [2024-07-15 23:28:35.318035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.261 [2024-07-15 23:28:35.318063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.261 qpair failed and we were unable to recover it. 
00:25:20.261 [2024-07-15 23:28:35.318274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.261 [2024-07-15 23:28:35.318302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.261 qpair failed and we were unable to recover it. 00:25:20.261 [2024-07-15 23:28:35.318435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.261 [2024-07-15 23:28:35.318463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.261 qpair failed and we were unable to recover it. 00:25:20.261 [2024-07-15 23:28:35.318578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.261 [2024-07-15 23:28:35.318606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.261 qpair failed and we were unable to recover it. 00:25:20.261 [2024-07-15 23:28:35.318721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.261 [2024-07-15 23:28:35.318760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.261 qpair failed and we were unable to recover it. 00:25:20.261 [2024-07-15 23:28:35.318934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.261 [2024-07-15 23:28:35.318959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:20.261 qpair failed and we were unable to recover it. 00:25:20.261 [2024-07-15 23:28:35.319164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.261 [2024-07-15 23:28:35.319207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.261 qpair failed and we were unable to recover it. 00:25:20.261 [2024-07-15 23:28:35.319407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.261 [2024-07-15 23:28:35.319450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.261 qpair failed and we were unable to recover it. 00:25:20.261 [2024-07-15 23:28:35.319597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.261 [2024-07-15 23:28:35.319639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.261 qpair failed and we were unable to recover it. 00:25:20.261 [2024-07-15 23:28:35.319810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.261 [2024-07-15 23:28:35.319835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.261 qpair failed and we were unable to recover it. 00:25:20.261 [2024-07-15 23:28:35.320044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.261 [2024-07-15 23:28:35.320087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.261 qpair failed and we were unable to recover it. 
00:25:20.261 [2024-07-15 23:28:35.320260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.261 [2024-07-15 23:28:35.320301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.261 qpair failed and we were unable to recover it. 00:25:20.261 [2024-07-15 23:28:35.320444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.261 [2024-07-15 23:28:35.320497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.261 qpair failed and we were unable to recover it. 00:25:20.261 [2024-07-15 23:28:35.320638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.261 [2024-07-15 23:28:35.320663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.261 qpair failed and we were unable to recover it. 00:25:20.261 [2024-07-15 23:28:35.320827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.261 [2024-07-15 23:28:35.320855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.261 qpair failed and we were unable to recover it. 00:25:20.261 [2024-07-15 23:28:35.321015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.261 [2024-07-15 23:28:35.321053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.261 qpair failed and we were unable to recover it. 00:25:20.261 [2024-07-15 23:28:35.321192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.261 [2024-07-15 23:28:35.321234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.261 qpair failed and we were unable to recover it. 00:25:20.261 [2024-07-15 23:28:35.321371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.261 [2024-07-15 23:28:35.321396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.261 qpair failed and we were unable to recover it. 00:25:20.261 [2024-07-15 23:28:35.321559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.261 [2024-07-15 23:28:35.321584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.261 qpair failed and we were unable to recover it. 00:25:20.261 [2024-07-15 23:28:35.321775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.261 [2024-07-15 23:28:35.321801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.261 qpair failed and we were unable to recover it. 00:25:20.261 [2024-07-15 23:28:35.321940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.261 [2024-07-15 23:28:35.321982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.261 qpair failed and we were unable to recover it. 
00:25:20.261 [2024-07-15 23:28:35.322172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.261 [2024-07-15 23:28:35.322194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.261 qpair failed and we were unable to recover it. 00:25:20.261 [2024-07-15 23:28:35.322375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.261 [2024-07-15 23:28:35.322402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.261 qpair failed and we were unable to recover it. 00:25:20.261 [2024-07-15 23:28:35.322591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.261 [2024-07-15 23:28:35.322624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.261 qpair failed and we were unable to recover it. 00:25:20.261 [2024-07-15 23:28:35.322792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.261 [2024-07-15 23:28:35.322821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.261 qpair failed and we were unable to recover it. 00:25:20.261 [2024-07-15 23:28:35.323032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.261 [2024-07-15 23:28:35.323075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.261 qpair failed and we were unable to recover it. 00:25:20.261 [2024-07-15 23:28:35.323225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.261 [2024-07-15 23:28:35.323276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.261 qpair failed and we were unable to recover it. 00:25:20.261 [2024-07-15 23:28:35.323452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.261 [2024-07-15 23:28:35.323476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.261 qpair failed and we were unable to recover it. 00:25:20.261 [2024-07-15 23:28:35.323614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.261 [2024-07-15 23:28:35.323642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.261 qpair failed and we were unable to recover it. 00:25:20.261 [2024-07-15 23:28:35.323821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.262 [2024-07-15 23:28:35.323850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.262 qpair failed and we were unable to recover it. 00:25:20.262 [2024-07-15 23:28:35.324005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.262 [2024-07-15 23:28:35.324030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.262 qpair failed and we were unable to recover it. 
00:25:20.262 [2024-07-15 23:28:35.324212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.262 [2024-07-15 23:28:35.324236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.262 qpair failed and we were unable to recover it. 00:25:20.262 [2024-07-15 23:28:35.324402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.262 [2024-07-15 23:28:35.324441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.262 qpair failed and we were unable to recover it. 00:25:20.262 [2024-07-15 23:28:35.324565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.262 [2024-07-15 23:28:35.324590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.262 qpair failed and we were unable to recover it. 00:25:20.262 [2024-07-15 23:28:35.324777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.262 [2024-07-15 23:28:35.324803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.262 qpair failed and we were unable to recover it. 00:25:20.262 [2024-07-15 23:28:35.324943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.262 [2024-07-15 23:28:35.324968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.262 qpair failed and we were unable to recover it. 00:25:20.262 [2024-07-15 23:28:35.325085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.262 [2024-07-15 23:28:35.325125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.262 qpair failed and we were unable to recover it. 00:25:20.262 [2024-07-15 23:28:35.325309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.262 [2024-07-15 23:28:35.325333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.262 qpair failed and we were unable to recover it. 00:25:20.262 [2024-07-15 23:28:35.325502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.262 [2024-07-15 23:28:35.325527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.262 qpair failed and we were unable to recover it. 00:25:20.262 [2024-07-15 23:28:35.325726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.262 [2024-07-15 23:28:35.325770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.262 qpair failed and we were unable to recover it. 00:25:20.262 [2024-07-15 23:28:35.325927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.262 [2024-07-15 23:28:35.325969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.262 qpair failed and we were unable to recover it. 
00:25:20.262 [2024-07-15 23:28:35.326144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.262 [2024-07-15 23:28:35.326172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.262 qpair failed and we were unable to recover it. 00:25:20.262 [2024-07-15 23:28:35.326318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.262 [2024-07-15 23:28:35.326356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.262 qpair failed and we were unable to recover it. 00:25:20.262 [2024-07-15 23:28:35.326522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.262 [2024-07-15 23:28:35.326546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.262 qpair failed and we were unable to recover it. 00:25:20.262 [2024-07-15 23:28:35.326652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.262 [2024-07-15 23:28:35.326676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.262 qpair failed and we were unable to recover it. 00:25:20.262 [2024-07-15 23:28:35.326859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.262 [2024-07-15 23:28:35.326902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.262 qpair failed and we were unable to recover it. 00:25:20.262 [2024-07-15 23:28:35.327036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.262 [2024-07-15 23:28:35.327064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.262 qpair failed and we were unable to recover it. 00:25:20.262 [2024-07-15 23:28:35.327245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.262 [2024-07-15 23:28:35.327286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.262 qpair failed and we were unable to recover it. 00:25:20.262 [2024-07-15 23:28:35.327475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.262 [2024-07-15 23:28:35.327507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.262 qpair failed and we were unable to recover it. 00:25:20.262 [2024-07-15 23:28:35.327671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.262 [2024-07-15 23:28:35.327695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.262 qpair failed and we were unable to recover it. 00:25:20.262 [2024-07-15 23:28:35.327946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.262 [2024-07-15 23:28:35.327988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.262 qpair failed and we were unable to recover it. 
00:25:20.262 [2024-07-15 23:28:35.328165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.262 [2024-07-15 23:28:35.328207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.262 qpair failed and we were unable to recover it. 00:25:20.262 [2024-07-15 23:28:35.328387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.262 [2024-07-15 23:28:35.328429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.262 qpair failed and we were unable to recover it. 00:25:20.262 [2024-07-15 23:28:35.328593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.262 [2024-07-15 23:28:35.328632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.262 qpair failed and we were unable to recover it. 00:25:20.262 [2024-07-15 23:28:35.328833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.262 [2024-07-15 23:28:35.328881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.262 qpair failed and we were unable to recover it. 00:25:20.262 [2024-07-15 23:28:35.329036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.262 [2024-07-15 23:28:35.329078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.262 qpair failed and we were unable to recover it. 00:25:20.262 [2024-07-15 23:28:35.329293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.262 [2024-07-15 23:28:35.329321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.262 qpair failed and we were unable to recover it. 00:25:20.262 [2024-07-15 23:28:35.329484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.262 [2024-07-15 23:28:35.329525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.262 qpair failed and we were unable to recover it. 00:25:20.262 [2024-07-15 23:28:35.329746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.262 [2024-07-15 23:28:35.329773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.262 qpair failed and we were unable to recover it. 00:25:20.262 [2024-07-15 23:28:35.329879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.262 [2024-07-15 23:28:35.329906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.262 qpair failed and we were unable to recover it. 00:25:20.262 [2024-07-15 23:28:35.330071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.262 [2024-07-15 23:28:35.330098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.262 qpair failed and we were unable to recover it. 
00:25:20.262 [2024-07-15 23:28:35.330242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.262 [2024-07-15 23:28:35.330266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.262 qpair failed and we were unable to recover it. 00:25:20.262 [2024-07-15 23:28:35.330466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.262 [2024-07-15 23:28:35.330490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.262 qpair failed and we were unable to recover it. 00:25:20.262 [2024-07-15 23:28:35.330687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.262 [2024-07-15 23:28:35.330711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.262 qpair failed and we were unable to recover it. 00:25:20.262 [2024-07-15 23:28:35.330838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.262 [2024-07-15 23:28:35.330866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.262 qpair failed and we were unable to recover it. 00:25:20.262 [2024-07-15 23:28:35.331067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.262 [2024-07-15 23:28:35.331110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.262 qpair failed and we were unable to recover it. 00:25:20.262 [2024-07-15 23:28:35.331242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.262 [2024-07-15 23:28:35.331275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.262 qpair failed and we were unable to recover it. 00:25:20.262 [2024-07-15 23:28:35.331472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.262 [2024-07-15 23:28:35.331515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.262 qpair failed and we were unable to recover it. 00:25:20.262 [2024-07-15 23:28:35.331701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.262 [2024-07-15 23:28:35.331728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.263 qpair failed and we were unable to recover it. 00:25:20.263 [2024-07-15 23:28:35.331939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.263 [2024-07-15 23:28:35.331993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.263 qpair failed and we were unable to recover it. 00:25:20.263 [2024-07-15 23:28:35.332168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.263 [2024-07-15 23:28:35.332208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.264 qpair failed and we were unable to recover it. 
00:25:20.264 [2024-07-15 23:28:35.332341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.264 [2024-07-15 23:28:35.332369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.264 qpair failed and we were unable to recover it. 00:25:20.264 [2024-07-15 23:28:35.332588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.264 [2024-07-15 23:28:35.332612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.264 qpair failed and we were unable to recover it. 00:25:20.264 [2024-07-15 23:28:35.332732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.264 [2024-07-15 23:28:35.332762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.264 qpair failed and we were unable to recover it. 00:25:20.264 [2024-07-15 23:28:35.332921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.264 [2024-07-15 23:28:35.332966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.264 qpair failed and we were unable to recover it. 00:25:20.264 [2024-07-15 23:28:35.333166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.264 [2024-07-15 23:28:35.333194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.264 qpair failed and we were unable to recover it. 00:25:20.264 [2024-07-15 23:28:35.333351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.264 [2024-07-15 23:28:35.333394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.264 qpair failed and we were unable to recover it. 00:25:20.264 [2024-07-15 23:28:35.333570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.264 [2024-07-15 23:28:35.333595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.264 qpair failed and we were unable to recover it. 00:25:20.264 [2024-07-15 23:28:35.333733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.264 [2024-07-15 23:28:35.333777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.264 qpair failed and we were unable to recover it. 00:25:20.264 [2024-07-15 23:28:35.333888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.264 [2024-07-15 23:28:35.333931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.264 qpair failed and we were unable to recover it. 00:25:20.264 [2024-07-15 23:28:35.334149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.264 [2024-07-15 23:28:35.334196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.264 qpair failed and we were unable to recover it. 
00:25:20.264 [2024-07-15 23:28:35.334369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.264 [2024-07-15 23:28:35.334411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.264 qpair failed and we were unable to recover it. 00:25:20.264 [2024-07-15 23:28:35.334617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.264 [2024-07-15 23:28:35.334641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.264 qpair failed and we were unable to recover it. 00:25:20.264 [2024-07-15 23:28:35.334755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.264 [2024-07-15 23:28:35.334781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.264 qpair failed and we were unable to recover it. 00:25:20.264 [2024-07-15 23:28:35.334986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.264 [2024-07-15 23:28:35.335014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.264 qpair failed and we were unable to recover it. 00:25:20.264 [2024-07-15 23:28:35.335176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.264 [2024-07-15 23:28:35.335204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.264 qpair failed and we were unable to recover it. 00:25:20.264 [2024-07-15 23:28:35.335387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.264 [2024-07-15 23:28:35.335427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.264 qpair failed and we were unable to recover it. 00:25:20.264 [2024-07-15 23:28:35.335575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.264 [2024-07-15 23:28:35.335598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.264 qpair failed and we were unable to recover it. 00:25:20.264 [2024-07-15 23:28:35.335781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.264 [2024-07-15 23:28:35.335824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.264 qpair failed and we were unable to recover it. 00:25:20.264 [2024-07-15 23:28:35.336022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.264 [2024-07-15 23:28:35.336064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.264 qpair failed and we were unable to recover it. 00:25:20.264 [2024-07-15 23:28:35.336232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.264 [2024-07-15 23:28:35.336273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.264 qpair failed and we were unable to recover it. 
00:25:20.264 [2024-07-15 23:28:35.336418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.264 [2024-07-15 23:28:35.336441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.264 qpair failed and we were unable to recover it. 00:25:20.264 [2024-07-15 23:28:35.336614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.264 [2024-07-15 23:28:35.336653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.264 qpair failed and we were unable to recover it. 00:25:20.264 [2024-07-15 23:28:35.336817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.264 [2024-07-15 23:28:35.336860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.264 qpair failed and we were unable to recover it. 00:25:20.264 [2024-07-15 23:28:35.336978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.264 [2024-07-15 23:28:35.337021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.264 qpair failed and we were unable to recover it. 00:25:20.264 [2024-07-15 23:28:35.337262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.264 [2024-07-15 23:28:35.337303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.264 qpair failed and we were unable to recover it. 00:25:20.264 [2024-07-15 23:28:35.337478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.264 [2024-07-15 23:28:35.337502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.264 qpair failed and we were unable to recover it. 00:25:20.264 [2024-07-15 23:28:35.337651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.264 [2024-07-15 23:28:35.337676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.264 qpair failed and we were unable to recover it. 00:25:20.264 [2024-07-15 23:28:35.337794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.264 [2024-07-15 23:28:35.337822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.264 qpair failed and we were unable to recover it. 00:25:20.264 [2024-07-15 23:28:35.337991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.264 [2024-07-15 23:28:35.338018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.264 qpair failed and we were unable to recover it. 00:25:20.264 [2024-07-15 23:28:35.338173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.264 [2024-07-15 23:28:35.338200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.264 qpair failed and we were unable to recover it. 
00:25:20.264 [2024-07-15 23:28:35.338361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.264 [2024-07-15 23:28:35.338404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.264 qpair failed and we were unable to recover it. 00:25:20.264 [2024-07-15 23:28:35.338576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.264 [2024-07-15 23:28:35.338607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.264 qpair failed and we were unable to recover it. 00:25:20.264 [2024-07-15 23:28:35.338772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.264 [2024-07-15 23:28:35.338799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.264 qpair failed and we were unable to recover it. 00:25:20.264 [2024-07-15 23:28:35.338946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.264 [2024-07-15 23:28:35.338997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.264 qpair failed and we were unable to recover it. 00:25:20.264 [2024-07-15 23:28:35.339159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.264 [2024-07-15 23:28:35.339200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.264 qpair failed and we were unable to recover it. 00:25:20.264 [2024-07-15 23:28:35.339337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.264 [2024-07-15 23:28:35.339361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.264 qpair failed and we were unable to recover it. 00:25:20.264 [2024-07-15 23:28:35.339532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.264 [2024-07-15 23:28:35.339556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.264 qpair failed and we were unable to recover it. 00:25:20.264 [2024-07-15 23:28:35.339712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.264 [2024-07-15 23:28:35.339743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.264 qpair failed and we were unable to recover it. 00:25:20.264 [2024-07-15 23:28:35.339920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.264 [2024-07-15 23:28:35.339962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.264 qpair failed and we were unable to recover it. 00:25:20.264 [2024-07-15 23:28:35.340145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.264 [2024-07-15 23:28:35.340193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.264 qpair failed and we were unable to recover it. 
00:25:20.264 [2024-07-15 23:28:35.340331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.265 [2024-07-15 23:28:35.340372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.265 qpair failed and we were unable to recover it. 00:25:20.265 [2024-07-15 23:28:35.340495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.265 [2024-07-15 23:28:35.340518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.265 qpair failed and we were unable to recover it. 00:25:20.265 [2024-07-15 23:28:35.340784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.265 [2024-07-15 23:28:35.340813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.265 qpair failed and we were unable to recover it. 00:25:20.265 [2024-07-15 23:28:35.340974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.265 [2024-07-15 23:28:35.341016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.265 qpair failed and we were unable to recover it. 00:25:20.265 [2024-07-15 23:28:35.341205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.265 [2024-07-15 23:28:35.341250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.265 qpair failed and we were unable to recover it. 00:25:20.265 [2024-07-15 23:28:35.341429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.265 [2024-07-15 23:28:35.341474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.265 qpair failed and we were unable to recover it. 00:25:20.265 [2024-07-15 23:28:35.341643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.265 [2024-07-15 23:28:35.341677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.265 qpair failed and we were unable to recover it. 00:25:20.265 [2024-07-15 23:28:35.341817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.265 [2024-07-15 23:28:35.341857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.265 qpair failed and we were unable to recover it. 00:25:20.265 [2024-07-15 23:28:35.342037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.265 [2024-07-15 23:28:35.342080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.265 qpair failed and we were unable to recover it. 00:25:20.265 [2024-07-15 23:28:35.342278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.265 [2024-07-15 23:28:35.342319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.265 qpair failed and we were unable to recover it. 
00:25:20.265 [2024-07-15 23:28:35.342505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.265 [2024-07-15 23:28:35.342545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.265 qpair failed and we were unable to recover it. 00:25:20.265 [2024-07-15 23:28:35.342715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.265 [2024-07-15 23:28:35.342755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.265 qpair failed and we were unable to recover it. 00:25:20.265 [2024-07-15 23:28:35.342924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.265 [2024-07-15 23:28:35.342966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.265 qpair failed and we were unable to recover it. 00:25:20.265 [2024-07-15 23:28:35.343121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.265 [2024-07-15 23:28:35.343162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.265 qpair failed and we were unable to recover it. 00:25:20.265 [2024-07-15 23:28:35.343330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.265 [2024-07-15 23:28:35.343370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.265 qpair failed and we were unable to recover it. 00:25:20.265 [2024-07-15 23:28:35.343499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.265 [2024-07-15 23:28:35.343541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.265 qpair failed and we were unable to recover it. 00:25:20.265 [2024-07-15 23:28:35.343724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.265 [2024-07-15 23:28:35.343755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.265 qpair failed and we were unable to recover it. 00:25:20.265 [2024-07-15 23:28:35.343903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.265 [2024-07-15 23:28:35.343946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.265 qpair failed and we were unable to recover it. 00:25:20.265 [2024-07-15 23:28:35.344111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.265 [2024-07-15 23:28:35.344151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.265 qpair failed and we were unable to recover it. 00:25:20.265 [2024-07-15 23:28:35.344272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.265 [2024-07-15 23:28:35.344314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.265 qpair failed and we were unable to recover it. 
00:25:20.265 [2024-07-15 23:28:35.344455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.265 [2024-07-15 23:28:35.344482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.265 qpair failed and we were unable to recover it. 00:25:20.265 [2024-07-15 23:28:35.344622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.265 [2024-07-15 23:28:35.344646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.265 qpair failed and we were unable to recover it. 00:25:20.265 [2024-07-15 23:28:35.344791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.265 [2024-07-15 23:28:35.344817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.265 qpair failed and we were unable to recover it. 00:25:20.265 [2024-07-15 23:28:35.344955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.265 [2024-07-15 23:28:35.344979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.265 qpair failed and we were unable to recover it. 00:25:20.265 [2024-07-15 23:28:35.345153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.265 [2024-07-15 23:28:35.345177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.265 qpair failed and we were unable to recover it. 00:25:20.265 [2024-07-15 23:28:35.345320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.265 [2024-07-15 23:28:35.345344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.265 qpair failed and we were unable to recover it. 00:25:20.265 [2024-07-15 23:28:35.345482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.265 [2024-07-15 23:28:35.345506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.265 qpair failed and we were unable to recover it. 00:25:20.265 [2024-07-15 23:28:35.345672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.265 [2024-07-15 23:28:35.345695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.265 qpair failed and we were unable to recover it. 00:25:20.265 [2024-07-15 23:28:35.345912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.265 [2024-07-15 23:28:35.345937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.265 qpair failed and we were unable to recover it. 00:25:20.265 [2024-07-15 23:28:35.346108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.265 [2024-07-15 23:28:35.346148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.265 qpair failed and we were unable to recover it. 
00:25:20.265 [2024-07-15 23:28:35.346319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.265 [2024-07-15 23:28:35.346360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.265 qpair failed and we were unable to recover it. 00:25:20.265 [2024-07-15 23:28:35.346551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.265 [2024-07-15 23:28:35.346576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.265 qpair failed and we were unable to recover it. 00:25:20.265 [2024-07-15 23:28:35.346700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.265 [2024-07-15 23:28:35.346724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.265 qpair failed and we were unable to recover it. 00:25:20.265 [2024-07-15 23:28:35.346894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.265 [2024-07-15 23:28:35.346944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.265 qpair failed and we were unable to recover it. 00:25:20.265 [2024-07-15 23:28:35.347099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.265 [2024-07-15 23:28:35.347145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.265 qpair failed and we were unable to recover it. 00:25:20.265 [2024-07-15 23:28:35.347317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.265 [2024-07-15 23:28:35.347358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.265 qpair failed and we were unable to recover it. 00:25:20.265 [2024-07-15 23:28:35.347518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.265 [2024-07-15 23:28:35.347541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.265 qpair failed and we were unable to recover it. 00:25:20.265 [2024-07-15 23:28:35.347666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.265 [2024-07-15 23:28:35.347694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.265 qpair failed and we were unable to recover it. 00:25:20.265 [2024-07-15 23:28:35.347851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.265 [2024-07-15 23:28:35.347894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.265 qpair failed and we were unable to recover it. 00:25:20.265 [2024-07-15 23:28:35.348012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.265 [2024-07-15 23:28:35.348053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.265 qpair failed and we were unable to recover it. 
00:25:20.265 [2024-07-15 23:28:35.348223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.265 [2024-07-15 23:28:35.348255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.265 qpair failed and we were unable to recover it. 00:25:20.266 [2024-07-15 23:28:35.348373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.266 [2024-07-15 23:28:35.348414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.266 qpair failed and we were unable to recover it. 00:25:20.266 [2024-07-15 23:28:35.348571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.266 [2024-07-15 23:28:35.348595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.266 qpair failed and we were unable to recover it. 00:25:20.266 [2024-07-15 23:28:35.348711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.266 [2024-07-15 23:28:35.348759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.266 qpair failed and we were unable to recover it. 00:25:20.266 [2024-07-15 23:28:35.348942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.266 [2024-07-15 23:28:35.348990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.266 qpair failed and we were unable to recover it. 00:25:20.266 [2024-07-15 23:28:35.349136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.266 [2024-07-15 23:28:35.349159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.266 qpair failed and we were unable to recover it. 00:25:20.266 [2024-07-15 23:28:35.349342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.266 [2024-07-15 23:28:35.349380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.266 qpair failed and we were unable to recover it. 00:25:20.266 [2024-07-15 23:28:35.349509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.266 [2024-07-15 23:28:35.349531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.266 qpair failed and we were unable to recover it. 00:25:20.266 [2024-07-15 23:28:35.349654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.266 [2024-07-15 23:28:35.349678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.266 qpair failed and we were unable to recover it. 00:25:20.266 [2024-07-15 23:28:35.349819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.266 [2024-07-15 23:28:35.349844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.266 qpair failed and we were unable to recover it. 
00:25:20.266 [2024-07-15 23:28:35.350060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.266 [2024-07-15 23:28:35.350083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.266 qpair failed and we were unable to recover it. 00:25:20.266 [2024-07-15 23:28:35.350255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.266 [2024-07-15 23:28:35.350278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.266 qpair failed and we were unable to recover it. 00:25:20.266 [2024-07-15 23:28:35.350443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.266 [2024-07-15 23:28:35.350477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.266 qpair failed and we were unable to recover it. 00:25:20.266 [2024-07-15 23:28:35.350635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.266 [2024-07-15 23:28:35.350657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.266 qpair failed and we were unable to recover it. 00:25:20.266 [2024-07-15 23:28:35.350820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.266 [2024-07-15 23:28:35.350863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.266 qpair failed and we were unable to recover it. 00:25:20.266 [2024-07-15 23:28:35.351031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.266 [2024-07-15 23:28:35.351083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.266 qpair failed and we were unable to recover it. 00:25:20.266 [2024-07-15 23:28:35.351239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.266 [2024-07-15 23:28:35.351280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.266 qpair failed and we were unable to recover it. 00:25:20.266 [2024-07-15 23:28:35.351455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.266 [2024-07-15 23:28:35.351484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.266 qpair failed and we were unable to recover it. 00:25:20.266 [2024-07-15 23:28:35.351652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.266 [2024-07-15 23:28:35.351674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.266 qpair failed and we were unable to recover it. 00:25:20.266 [2024-07-15 23:28:35.351846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.266 [2024-07-15 23:28:35.351869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.266 qpair failed and we were unable to recover it. 
00:25:20.266 [2024-07-15 23:28:35.352084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.266 [2024-07-15 23:28:35.352124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.266 qpair failed and we were unable to recover it. 00:25:20.266 [2024-07-15 23:28:35.352312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.266 [2024-07-15 23:28:35.352361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.266 qpair failed and we were unable to recover it. 00:25:20.266 [2024-07-15 23:28:35.352508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.266 [2024-07-15 23:28:35.352536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.266 qpair failed and we were unable to recover it. 00:25:20.266 [2024-07-15 23:28:35.352691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.266 [2024-07-15 23:28:35.352727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.266 qpair failed and we were unable to recover it. 00:25:20.266 [2024-07-15 23:28:35.352943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.266 [2024-07-15 23:28:35.352984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.266 qpair failed and we were unable to recover it. 00:25:20.266 [2024-07-15 23:28:35.353163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.266 [2024-07-15 23:28:35.353204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.266 qpair failed and we were unable to recover it. 00:25:20.266 [2024-07-15 23:28:35.353386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.266 [2024-07-15 23:28:35.353427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.266 qpair failed and we were unable to recover it. 00:25:20.266 [2024-07-15 23:28:35.353587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.266 [2024-07-15 23:28:35.353612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.266 qpair failed and we were unable to recover it. 00:25:20.266 [2024-07-15 23:28:35.353805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.266 [2024-07-15 23:28:35.353834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.266 qpair failed and we were unable to recover it. 00:25:20.266 [2024-07-15 23:28:35.354038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.266 [2024-07-15 23:28:35.354063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.266 qpair failed and we were unable to recover it. 
00:25:20.266 [2024-07-15 23:28:35.354249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.266 [2024-07-15 23:28:35.354276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.266 qpair failed and we were unable to recover it. 00:25:20.266 [2024-07-15 23:28:35.354435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.266 [2024-07-15 23:28:35.354462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.266 qpair failed and we were unable to recover it. 00:25:20.266 [2024-07-15 23:28:35.354638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.266 [2024-07-15 23:28:35.354676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.266 qpair failed and we were unable to recover it. 00:25:20.266 [2024-07-15 23:28:35.354803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.266 [2024-07-15 23:28:35.354828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.266 qpair failed and we were unable to recover it. 00:25:20.266 [2024-07-15 23:28:35.354971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.266 [2024-07-15 23:28:35.355011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.266 qpair failed and we were unable to recover it. 00:25:20.266 [2024-07-15 23:28:35.355183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.266 [2024-07-15 23:28:35.355225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.266 qpair failed and we were unable to recover it. 00:25:20.266 [2024-07-15 23:28:35.355392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.266 [2024-07-15 23:28:35.355441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.266 qpair failed and we were unable to recover it. 00:25:20.266 [2024-07-15 23:28:35.355608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.266 [2024-07-15 23:28:35.355642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.266 qpair failed and we were unable to recover it. 00:25:20.266 [2024-07-15 23:28:35.355831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.266 [2024-07-15 23:28:35.355873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.266 qpair failed and we were unable to recover it. 00:25:20.266 [2024-07-15 23:28:35.356057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.266 [2024-07-15 23:28:35.356099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.266 qpair failed and we were unable to recover it. 
00:25:20.266 [2024-07-15 23:28:35.356284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.266 [2024-07-15 23:28:35.356335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.266 qpair failed and we were unable to recover it. 00:25:20.266 [2024-07-15 23:28:35.356536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.266 [2024-07-15 23:28:35.356558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.266 qpair failed and we were unable to recover it. 00:25:20.267 [2024-07-15 23:28:35.356745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.267 [2024-07-15 23:28:35.356769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.267 qpair failed and we were unable to recover it. 00:25:20.267 [2024-07-15 23:28:35.356928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.267 [2024-07-15 23:28:35.356978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.267 qpair failed and we were unable to recover it. 00:25:20.267 [2024-07-15 23:28:35.357166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.267 [2024-07-15 23:28:35.357206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.267 qpair failed and we were unable to recover it. 00:25:20.267 [2024-07-15 23:28:35.357377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.267 [2024-07-15 23:28:35.357418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.267 qpair failed and we were unable to recover it. 00:25:20.267 [2024-07-15 23:28:35.357643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.267 [2024-07-15 23:28:35.357666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.267 qpair failed and we were unable to recover it. 00:25:20.267 [2024-07-15 23:28:35.357848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.267 [2024-07-15 23:28:35.357880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.267 qpair failed and we were unable to recover it. 00:25:20.267 [2024-07-15 23:28:35.358096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.267 [2024-07-15 23:28:35.358136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.267 qpair failed and we were unable to recover it. 00:25:20.267 [2024-07-15 23:28:35.358318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.267 [2024-07-15 23:28:35.358365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.267 qpair failed and we were unable to recover it. 
00:25:20.267 [2024-07-15 23:28:35.358515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.267 [2024-07-15 23:28:35.358563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.267 qpair failed and we were unable to recover it. 00:25:20.267 [2024-07-15 23:28:35.358771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.267 [2024-07-15 23:28:35.358796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.267 qpair failed and we were unable to recover it. 00:25:20.267 [2024-07-15 23:28:35.358951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.267 [2024-07-15 23:28:35.358992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.267 qpair failed and we were unable to recover it. 00:25:20.267 [2024-07-15 23:28:35.359202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.267 [2024-07-15 23:28:35.359241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.267 qpair failed and we were unable to recover it. 00:25:20.267 [2024-07-15 23:28:35.359412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.267 [2024-07-15 23:28:35.359452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.267 qpair failed and we were unable to recover it. 00:25:20.267 [2024-07-15 23:28:35.359611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.267 [2024-07-15 23:28:35.359633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.267 qpair failed and we were unable to recover it. 00:25:20.267 [2024-07-15 23:28:35.359817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.267 [2024-07-15 23:28:35.359842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.267 qpair failed and we were unable to recover it. 00:25:20.267 [2024-07-15 23:28:35.359972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.267 [2024-07-15 23:28:35.359995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.267 qpair failed and we were unable to recover it. 00:25:20.267 [2024-07-15 23:28:35.360160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.267 [2024-07-15 23:28:35.360182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.267 qpair failed and we were unable to recover it. 00:25:20.267 [2024-07-15 23:28:35.360362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.267 [2024-07-15 23:28:35.360400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.267 qpair failed and we were unable to recover it. 
00:25:20.267 [2024-07-15 23:28:35.360505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.267 [2024-07-15 23:28:35.360543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.267 qpair failed and we were unable to recover it. 00:25:20.267 [2024-07-15 23:28:35.360757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.267 [2024-07-15 23:28:35.360782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.267 qpair failed and we were unable to recover it. 00:25:20.267 [2024-07-15 23:28:35.360931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.267 [2024-07-15 23:28:35.360954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.267 qpair failed and we were unable to recover it. 00:25:20.267 [2024-07-15 23:28:35.361157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.267 [2024-07-15 23:28:35.361179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.267 qpair failed and we were unable to recover it. 00:25:20.267 [2024-07-15 23:28:35.361336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.267 [2024-07-15 23:28:35.361370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.267 qpair failed and we were unable to recover it. 00:25:20.267 [2024-07-15 23:28:35.361549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.267 [2024-07-15 23:28:35.361572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.267 qpair failed and we were unable to recover it. 00:25:20.267 [2024-07-15 23:28:35.361751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.267 [2024-07-15 23:28:35.361789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.267 qpair failed and we were unable to recover it. 00:25:20.267 [2024-07-15 23:28:35.361987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.267 [2024-07-15 23:28:35.362010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.267 qpair failed and we were unable to recover it. 00:25:20.267 [2024-07-15 23:28:35.362234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.267 [2024-07-15 23:28:35.362283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.267 qpair failed and we were unable to recover it. 00:25:20.267 [2024-07-15 23:28:35.362505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.267 [2024-07-15 23:28:35.362546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.267 qpair failed and we were unable to recover it. 
00:25:20.267 [2024-07-15 23:28:35.362714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.267 [2024-07-15 23:28:35.362762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420
00:25:20.267 qpair failed and we were unable to recover it.
00:25:20.273 [... the same connect() failure (errno = 111, tqpair=0x7f7a18000b90, addr=10.0.0.2, port=4420) repeats continuously through 2024-07-15 23:28:35.409321; duplicate records omitted ...]
00:25:20.273 [2024-07-15 23:28:35.409482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.273 [2024-07-15 23:28:35.409512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.273 qpair failed and we were unable to recover it. 00:25:20.273 [2024-07-15 23:28:35.409690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.273 [2024-07-15 23:28:35.409727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.273 qpair failed and we were unable to recover it. 00:25:20.273 [2024-07-15 23:28:35.410006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.273 [2024-07-15 23:28:35.410031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.273 qpair failed and we were unable to recover it. 00:25:20.273 [2024-07-15 23:28:35.410170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.273 [2024-07-15 23:28:35.410193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.273 qpair failed and we were unable to recover it. 00:25:20.273 [2024-07-15 23:28:35.410406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.273 [2024-07-15 23:28:35.410429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.273 qpair failed and we were unable to recover it. 00:25:20.273 [2024-07-15 23:28:35.410567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.273 [2024-07-15 23:28:35.410589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.273 qpair failed and we were unable to recover it. 00:25:20.273 [2024-07-15 23:28:35.410787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.273 [2024-07-15 23:28:35.410814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.273 qpair failed and we were unable to recover it. 00:25:20.273 [2024-07-15 23:28:35.410964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.273 [2024-07-15 23:28:35.411005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.273 qpair failed and we were unable to recover it. 00:25:20.273 [2024-07-15 23:28:35.411153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.273 [2024-07-15 23:28:35.411194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.273 qpair failed and we were unable to recover it. 00:25:20.273 [2024-07-15 23:28:35.411432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.273 [2024-07-15 23:28:35.411471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.273 qpair failed and we were unable to recover it. 
00:25:20.273 [2024-07-15 23:28:35.411609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.273 [2024-07-15 23:28:35.411632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.273 qpair failed and we were unable to recover it. 00:25:20.273 [2024-07-15 23:28:35.411846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.273 [2024-07-15 23:28:35.411875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.273 qpair failed and we were unable to recover it. 00:25:20.273 [2024-07-15 23:28:35.412112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.273 [2024-07-15 23:28:35.412143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.273 qpair failed and we were unable to recover it. 00:25:20.273 [2024-07-15 23:28:35.412366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.273 [2024-07-15 23:28:35.412407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.273 qpair failed and we were unable to recover it. 00:25:20.273 [2024-07-15 23:28:35.412520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.273 [2024-07-15 23:28:35.412557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.273 qpair failed and we were unable to recover it. 00:25:20.273 [2024-07-15 23:28:35.412684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.273 [2024-07-15 23:28:35.412707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.273 qpair failed and we were unable to recover it. 00:25:20.273 [2024-07-15 23:28:35.412945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.273 [2024-07-15 23:28:35.412977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.273 qpair failed and we were unable to recover it. 00:25:20.273 [2024-07-15 23:28:35.413117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.273 [2024-07-15 23:28:35.413145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.273 qpair failed and we were unable to recover it. 00:25:20.273 [2024-07-15 23:28:35.413316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.273 [2024-07-15 23:28:35.413343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.273 qpair failed and we were unable to recover it. 00:25:20.273 [2024-07-15 23:28:35.413525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.273 [2024-07-15 23:28:35.413569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.273 qpair failed and we were unable to recover it. 
00:25:20.273 [2024-07-15 23:28:35.413757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.273 [2024-07-15 23:28:35.413799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.273 qpair failed and we were unable to recover it. 00:25:20.273 [2024-07-15 23:28:35.413925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.273 [2024-07-15 23:28:35.413967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.273 qpair failed and we were unable to recover it. 00:25:20.273 [2024-07-15 23:28:35.414231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.273 [2024-07-15 23:28:35.414270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.273 qpair failed and we were unable to recover it. 00:25:20.273 [2024-07-15 23:28:35.414437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.273 [2024-07-15 23:28:35.414460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.273 qpair failed and we were unable to recover it. 00:25:20.273 [2024-07-15 23:28:35.414621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.273 [2024-07-15 23:28:35.414644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.273 qpair failed and we were unable to recover it. 00:25:20.273 [2024-07-15 23:28:35.414780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.273 [2024-07-15 23:28:35.414804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.273 qpair failed and we were unable to recover it. 00:25:20.273 [2024-07-15 23:28:35.414992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.273 [2024-07-15 23:28:35.415035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.273 qpair failed and we were unable to recover it. 00:25:20.273 [2024-07-15 23:28:35.415162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.273 [2024-07-15 23:28:35.415190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.273 qpair failed and we were unable to recover it. 00:25:20.273 [2024-07-15 23:28:35.415366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.273 [2024-07-15 23:28:35.415394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.273 qpair failed and we were unable to recover it. 00:25:20.273 [2024-07-15 23:28:35.415619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.273 [2024-07-15 23:28:35.415641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.273 qpair failed and we were unable to recover it. 
00:25:20.273 [2024-07-15 23:28:35.415832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.273 [2024-07-15 23:28:35.415881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.273 qpair failed and we were unable to recover it. 00:25:20.273 [2024-07-15 23:28:35.416079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.273 [2024-07-15 23:28:35.416120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.273 qpair failed and we were unable to recover it. 00:25:20.273 [2024-07-15 23:28:35.416267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.273 [2024-07-15 23:28:35.416300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.273 qpair failed and we were unable to recover it. 00:25:20.273 [2024-07-15 23:28:35.416498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.273 [2024-07-15 23:28:35.416528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.273 qpair failed and we were unable to recover it. 00:25:20.273 [2024-07-15 23:28:35.416824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.273 [2024-07-15 23:28:35.416849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.273 qpair failed and we were unable to recover it. 00:25:20.274 [2024-07-15 23:28:35.417041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.274 [2024-07-15 23:28:35.417083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.274 qpair failed and we were unable to recover it. 00:25:20.274 [2024-07-15 23:28:35.417366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.274 [2024-07-15 23:28:35.417406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.274 qpair failed and we were unable to recover it. 00:25:20.274 [2024-07-15 23:28:35.417669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.274 [2024-07-15 23:28:35.417695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.274 qpair failed and we were unable to recover it. 00:25:20.274 [2024-07-15 23:28:35.417883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.274 [2024-07-15 23:28:35.417907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.274 qpair failed and we were unable to recover it. 00:25:20.274 [2024-07-15 23:28:35.418153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.274 [2024-07-15 23:28:35.418195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.274 qpair failed and we were unable to recover it. 
00:25:20.274 [2024-07-15 23:28:35.418380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.274 [2024-07-15 23:28:35.418422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.274 qpair failed and we were unable to recover it. 00:25:20.274 [2024-07-15 23:28:35.418644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.274 [2024-07-15 23:28:35.418667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.274 qpair failed and we were unable to recover it. 00:25:20.274 [2024-07-15 23:28:35.418840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.274 [2024-07-15 23:28:35.418864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.274 qpair failed and we were unable to recover it. 00:25:20.274 [2024-07-15 23:28:35.419047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.274 [2024-07-15 23:28:35.419099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.274 qpair failed and we were unable to recover it. 00:25:20.274 [2024-07-15 23:28:35.419312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.274 [2024-07-15 23:28:35.419352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.274 qpair failed and we were unable to recover it. 00:25:20.274 [2024-07-15 23:28:35.419569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.274 [2024-07-15 23:28:35.419609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.274 qpair failed and we were unable to recover it. 00:25:20.274 [2024-07-15 23:28:35.419733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.274 [2024-07-15 23:28:35.419762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.274 qpair failed and we were unable to recover it. 00:25:20.274 [2024-07-15 23:28:35.420069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.274 [2024-07-15 23:28:35.420110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.274 qpair failed and we were unable to recover it. 00:25:20.274 [2024-07-15 23:28:35.420352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.274 [2024-07-15 23:28:35.420394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.274 qpair failed and we were unable to recover it. 00:25:20.274 [2024-07-15 23:28:35.420516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.274 [2024-07-15 23:28:35.420558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.274 qpair failed and we were unable to recover it. 
00:25:20.274 [2024-07-15 23:28:35.420814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.274 [2024-07-15 23:28:35.420839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.274 qpair failed and we were unable to recover it. 00:25:20.274 [2024-07-15 23:28:35.421020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.274 [2024-07-15 23:28:35.421062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.274 qpair failed and we were unable to recover it. 00:25:20.274 [2024-07-15 23:28:35.421218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.274 [2024-07-15 23:28:35.421259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.274 qpair failed and we were unable to recover it. 00:25:20.274 [2024-07-15 23:28:35.421471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.274 [2024-07-15 23:28:35.421511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.274 qpair failed and we were unable to recover it. 00:25:20.274 [2024-07-15 23:28:35.421689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.274 [2024-07-15 23:28:35.421712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.274 qpair failed and we were unable to recover it. 00:25:20.274 [2024-07-15 23:28:35.421950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.274 [2024-07-15 23:28:35.421980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.274 qpair failed and we were unable to recover it. 00:25:20.274 [2024-07-15 23:28:35.422141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.274 [2024-07-15 23:28:35.422182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.274 qpair failed and we were unable to recover it. 00:25:20.274 [2024-07-15 23:28:35.422475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.274 [2024-07-15 23:28:35.422516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.274 qpair failed and we were unable to recover it. 00:25:20.274 [2024-07-15 23:28:35.422838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.274 [2024-07-15 23:28:35.422863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.274 qpair failed and we were unable to recover it. 00:25:20.274 [2024-07-15 23:28:35.423024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.274 [2024-07-15 23:28:35.423064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.274 qpair failed and we were unable to recover it. 
00:25:20.274 [2024-07-15 23:28:35.423233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.274 [2024-07-15 23:28:35.423283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.274 qpair failed and we were unable to recover it. 00:25:20.274 [2024-07-15 23:28:35.423527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.274 [2024-07-15 23:28:35.423566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.274 qpair failed and we were unable to recover it. 00:25:20.274 [2024-07-15 23:28:35.423697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.274 [2024-07-15 23:28:35.423730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.274 qpair failed and we were unable to recover it. 00:25:20.274 [2024-07-15 23:28:35.423990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.274 [2024-07-15 23:28:35.424014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.274 qpair failed and we were unable to recover it. 00:25:20.274 [2024-07-15 23:28:35.424208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.274 [2024-07-15 23:28:35.424248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.274 qpair failed and we were unable to recover it. 00:25:20.274 [2024-07-15 23:28:35.424435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.274 [2024-07-15 23:28:35.424487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.274 qpair failed and we were unable to recover it. 00:25:20.274 [2024-07-15 23:28:35.424674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.274 [2024-07-15 23:28:35.424696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.274 qpair failed and we were unable to recover it. 00:25:20.274 [2024-07-15 23:28:35.424908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.274 [2024-07-15 23:28:35.424957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.274 qpair failed and we were unable to recover it. 00:25:20.274 [2024-07-15 23:28:35.425215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.274 [2024-07-15 23:28:35.425256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.274 qpair failed and we were unable to recover it. 00:25:20.274 [2024-07-15 23:28:35.425407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.274 [2024-07-15 23:28:35.425430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.274 qpair failed and we were unable to recover it. 
00:25:20.274 [2024-07-15 23:28:35.425584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.274 [2024-07-15 23:28:35.425608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.274 qpair failed and we were unable to recover it. 00:25:20.274 [2024-07-15 23:28:35.425847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.274 [2024-07-15 23:28:35.425890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.274 qpair failed and we were unable to recover it. 00:25:20.274 [2024-07-15 23:28:35.426038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.274 [2024-07-15 23:28:35.426065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.274 qpair failed and we were unable to recover it. 00:25:20.274 [2024-07-15 23:28:35.426322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.274 [2024-07-15 23:28:35.426362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.274 qpair failed and we were unable to recover it. 00:25:20.274 [2024-07-15 23:28:35.426521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.274 [2024-07-15 23:28:35.426553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.274 qpair failed and we were unable to recover it. 00:25:20.274 [2024-07-15 23:28:35.426761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.274 [2024-07-15 23:28:35.426784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.275 qpair failed and we were unable to recover it. 00:25:20.275 [2024-07-15 23:28:35.426965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.275 [2024-07-15 23:28:35.427009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.275 qpair failed and we were unable to recover it. 00:25:20.275 [2024-07-15 23:28:35.427189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.275 [2024-07-15 23:28:35.427231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.275 qpair failed and we were unable to recover it. 00:25:20.275 [2024-07-15 23:28:35.427407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.275 [2024-07-15 23:28:35.427451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.275 qpair failed and we were unable to recover it. 00:25:20.275 [2024-07-15 23:28:35.427692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.275 [2024-07-15 23:28:35.427715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.275 qpair failed and we were unable to recover it. 
00:25:20.275 [2024-07-15 23:28:35.427946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.275 [2024-07-15 23:28:35.427988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.275 qpair failed and we were unable to recover it. 00:25:20.275 [2024-07-15 23:28:35.428144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.275 [2024-07-15 23:28:35.428191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.275 qpair failed and we were unable to recover it. 00:25:20.275 [2024-07-15 23:28:35.428488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.275 [2024-07-15 23:28:35.428530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.275 qpair failed and we were unable to recover it. 00:25:20.275 [2024-07-15 23:28:35.428731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.275 [2024-07-15 23:28:35.428774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.275 qpair failed and we were unable to recover it. 00:25:20.275 [2024-07-15 23:28:35.428937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.275 [2024-07-15 23:28:35.428961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.275 qpair failed and we were unable to recover it. 00:25:20.275 [2024-07-15 23:28:35.429160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.275 [2024-07-15 23:28:35.429201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.275 qpair failed and we were unable to recover it. 00:25:20.275 [2024-07-15 23:28:35.429422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.275 [2024-07-15 23:28:35.429461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.275 qpair failed and we were unable to recover it. 00:25:20.275 [2024-07-15 23:28:35.429669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.275 [2024-07-15 23:28:35.429696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.275 qpair failed and we were unable to recover it. 00:25:20.275 [2024-07-15 23:28:35.429948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.275 [2024-07-15 23:28:35.429973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.275 qpair failed and we were unable to recover it. 00:25:20.275 [2024-07-15 23:28:35.430189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.275 [2024-07-15 23:28:35.430231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.275 qpair failed and we were unable to recover it. 
00:25:20.275 [2024-07-15 23:28:35.430372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.275 [2024-07-15 23:28:35.430413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.275 qpair failed and we were unable to recover it. 00:25:20.275 [2024-07-15 23:28:35.430743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.275 [2024-07-15 23:28:35.430768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.275 qpair failed and we were unable to recover it. 00:25:20.275 [2024-07-15 23:28:35.430950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.275 [2024-07-15 23:28:35.430974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.275 qpair failed and we were unable to recover it. 00:25:20.275 [2024-07-15 23:28:35.431333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.275 [2024-07-15 23:28:35.431390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.275 qpair failed and we were unable to recover it. 00:25:20.275 [2024-07-15 23:28:35.431570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.275 [2024-07-15 23:28:35.431618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.275 qpair failed and we were unable to recover it. 00:25:20.275 [2024-07-15 23:28:35.431745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.275 [2024-07-15 23:28:35.431785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.275 qpair failed and we were unable to recover it. 00:25:20.275 [2024-07-15 23:28:35.432006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.275 [2024-07-15 23:28:35.432029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.275 qpair failed and we were unable to recover it. 00:25:20.275 [2024-07-15 23:28:35.432250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.275 [2024-07-15 23:28:35.432292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.275 qpair failed and we were unable to recover it. 00:25:20.275 [2024-07-15 23:28:35.432508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.275 [2024-07-15 23:28:35.432549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.275 qpair failed and we were unable to recover it. 00:25:20.275 [2024-07-15 23:28:35.432676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.275 [2024-07-15 23:28:35.432698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.275 qpair failed and we were unable to recover it. 
00:25:20.275 [2024-07-15 23:28:35.432948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.275 [2024-07-15 23:28:35.432983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.275 qpair failed and we were unable to recover it. 00:25:20.275 [2024-07-15 23:28:35.433177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.275 [2024-07-15 23:28:35.433217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.275 qpair failed and we were unable to recover it. 00:25:20.275 [2024-07-15 23:28:35.433372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.275 [2024-07-15 23:28:35.433412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.275 qpair failed and we were unable to recover it. 00:25:20.275 [2024-07-15 23:28:35.433605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.275 [2024-07-15 23:28:35.433627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.275 qpair failed and we were unable to recover it. 00:25:20.275 [2024-07-15 23:28:35.433844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.275 [2024-07-15 23:28:35.433873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.275 qpair failed and we were unable to recover it. 00:25:20.275 [2024-07-15 23:28:35.434034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.275 [2024-07-15 23:28:35.434075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.275 qpair failed and we were unable to recover it. 00:25:20.275 [2024-07-15 23:28:35.434317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.275 [2024-07-15 23:28:35.434340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.275 qpair failed and we were unable to recover it. 00:25:20.275 [2024-07-15 23:28:35.434521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.275 [2024-07-15 23:28:35.434562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.275 qpair failed and we were unable to recover it. 00:25:20.275 [2024-07-15 23:28:35.434698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.275 [2024-07-15 23:28:35.434720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.275 qpair failed and we were unable to recover it. 00:25:20.275 [2024-07-15 23:28:35.434845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.275 [2024-07-15 23:28:35.434876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.275 qpair failed and we were unable to recover it. 
00:25:20.275 [2024-07-15 23:28:35.435082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.275 [2024-07-15 23:28:35.435123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.275 qpair failed and we were unable to recover it. 00:25:20.275 [2024-07-15 23:28:35.435318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.275 [2024-07-15 23:28:35.435357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.275 qpair failed and we were unable to recover it. 00:25:20.275 [2024-07-15 23:28:35.435581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.275 [2024-07-15 23:28:35.435604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.275 qpair failed and we were unable to recover it. 00:25:20.275 [2024-07-15 23:28:35.435724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.275 [2024-07-15 23:28:35.435753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.275 qpair failed and we were unable to recover it. 00:25:20.275 [2024-07-15 23:28:35.435912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.275 [2024-07-15 23:28:35.435955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.275 qpair failed and we were unable to recover it. 00:25:20.275 [2024-07-15 23:28:35.436077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.275 [2024-07-15 23:28:35.436101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.275 qpair failed and we were unable to recover it. 00:25:20.275 [2024-07-15 23:28:35.436280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.275 [2024-07-15 23:28:35.436321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.276 qpair failed and we were unable to recover it. 00:25:20.276 [2024-07-15 23:28:35.436503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.276 [2024-07-15 23:28:35.436525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.276 qpair failed and we were unable to recover it. 00:25:20.276 [2024-07-15 23:28:35.436696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.276 [2024-07-15 23:28:35.436719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.276 qpair failed and we were unable to recover it. 00:25:20.276 [2024-07-15 23:28:35.436897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.276 [2024-07-15 23:28:35.436950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.276 qpair failed and we were unable to recover it. 
00:25:20.276 [2024-07-15 23:28:35.437120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.276 [2024-07-15 23:28:35.437166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.276 qpair failed and we were unable to recover it. 00:25:20.276 [2024-07-15 23:28:35.437320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.276 [2024-07-15 23:28:35.437363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.276 qpair failed and we were unable to recover it. 00:25:20.276 [2024-07-15 23:28:35.437565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.276 [2024-07-15 23:28:35.437587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.276 qpair failed and we were unable to recover it. 00:25:20.276 [2024-07-15 23:28:35.437808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.276 [2024-07-15 23:28:35.437852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.276 qpair failed and we were unable to recover it. 00:25:20.276 [2024-07-15 23:28:35.438020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.276 [2024-07-15 23:28:35.438048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.276 qpair failed and we were unable to recover it. 00:25:20.276 [2024-07-15 23:28:35.438200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.276 [2024-07-15 23:28:35.438229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.276 qpair failed and we were unable to recover it. 00:25:20.276 [2024-07-15 23:28:35.438400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.276 [2024-07-15 23:28:35.438428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.276 qpair failed and we were unable to recover it. 00:25:20.276 [2024-07-15 23:28:35.438590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.276 [2024-07-15 23:28:35.438632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.276 qpair failed and we were unable to recover it. 00:25:20.276 [2024-07-15 23:28:35.438810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.276 [2024-07-15 23:28:35.438850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.276 qpair failed and we were unable to recover it. 00:25:20.276 [2024-07-15 23:28:35.439010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.276 [2024-07-15 23:28:35.439052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.276 qpair failed and we were unable to recover it. 
00:25:20.276 [2024-07-15 23:28:35.439221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.276 [2024-07-15 23:28:35.439263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420
00:25:20.276 qpair failed and we were unable to recover it.
[The same three-line sequence repeats for every intervening reconnect attempt between the first pair above and the last pair below (Jenkins timestamps 00:25:20.276-00:25:20.281, application timestamps 2024-07-15 23:28:35.439 through 23:28:35.483): posix_sock_create reports connect() failed, errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420, and each qpair fails without recovery.]
00:25:20.281 [2024-07-15 23:28:35.483727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.281 [2024-07-15 23:28:35.483761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420
00:25:20.281 qpair failed and we were unable to recover it.
00:25:20.281 [2024-07-15 23:28:35.483960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.281 [2024-07-15 23:28:35.484003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.281 qpair failed and we were unable to recover it. 00:25:20.281 [2024-07-15 23:28:35.484173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.281 [2024-07-15 23:28:35.484227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.281 qpair failed and we were unable to recover it. 00:25:20.281 [2024-07-15 23:28:35.484377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.281 [2024-07-15 23:28:35.484404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.281 qpair failed and we were unable to recover it. 00:25:20.281 [2024-07-15 23:28:35.484544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.281 [2024-07-15 23:28:35.484568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.281 qpair failed and we were unable to recover it. 00:25:20.281 [2024-07-15 23:28:35.484734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.281 [2024-07-15 23:28:35.484770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.281 qpair failed and we were unable to recover it. 00:25:20.281 [2024-07-15 23:28:35.484933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.281 [2024-07-15 23:28:35.484975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.281 qpair failed and we were unable to recover it. 00:25:20.281 [2024-07-15 23:28:35.485105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.281 [2024-07-15 23:28:35.485145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.281 qpair failed and we were unable to recover it. 00:25:20.281 [2024-07-15 23:28:35.485393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.281 [2024-07-15 23:28:35.485434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.281 qpair failed and we were unable to recover it. 00:25:20.281 [2024-07-15 23:28:35.485647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.281 [2024-07-15 23:28:35.485672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.282 qpair failed and we were unable to recover it. 00:25:20.282 [2024-07-15 23:28:35.485836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.282 [2024-07-15 23:28:35.485862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.282 qpair failed and we were unable to recover it. 
00:25:20.282 [2024-07-15 23:28:35.486003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.282 [2024-07-15 23:28:35.486032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.282 qpair failed and we were unable to recover it. 00:25:20.282 [2024-07-15 23:28:35.486200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.282 [2024-07-15 23:28:35.486240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.282 qpair failed and we were unable to recover it. 00:25:20.282 [2024-07-15 23:28:35.486416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.282 [2024-07-15 23:28:35.486463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.282 qpair failed and we were unable to recover it. 00:25:20.282 [2024-07-15 23:28:35.486612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.282 [2024-07-15 23:28:35.486637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.282 qpair failed and we were unable to recover it. 00:25:20.282 [2024-07-15 23:28:35.486852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.282 [2024-07-15 23:28:35.486904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.282 qpair failed and we were unable to recover it. 00:25:20.282 [2024-07-15 23:28:35.487060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.282 [2024-07-15 23:28:35.487102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.282 qpair failed and we were unable to recover it. 00:25:20.282 [2024-07-15 23:28:35.487267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.282 [2024-07-15 23:28:35.487318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.282 qpair failed and we were unable to recover it. 00:25:20.282 [2024-07-15 23:28:35.487487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.282 [2024-07-15 23:28:35.487511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.282 qpair failed and we were unable to recover it. 00:25:20.282 [2024-07-15 23:28:35.487643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.282 [2024-07-15 23:28:35.487683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.282 qpair failed and we were unable to recover it. 00:25:20.282 [2024-07-15 23:28:35.487960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.282 [2024-07-15 23:28:35.488001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.282 qpair failed and we were unable to recover it. 
00:25:20.282 [2024-07-15 23:28:35.488206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.282 [2024-07-15 23:28:35.488248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.282 qpair failed and we were unable to recover it. 00:25:20.282 [2024-07-15 23:28:35.488453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.282 [2024-07-15 23:28:35.488501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.282 qpair failed and we were unable to recover it. 00:25:20.282 [2024-07-15 23:28:35.488664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.282 [2024-07-15 23:28:35.488689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.282 qpair failed and we were unable to recover it. 00:25:20.282 [2024-07-15 23:28:35.488833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.282 [2024-07-15 23:28:35.488862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.282 qpair failed and we were unable to recover it. 00:25:20.282 [2024-07-15 23:28:35.489140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.282 [2024-07-15 23:28:35.489180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.282 qpair failed and we were unable to recover it. 00:25:20.282 [2024-07-15 23:28:35.489323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.282 [2024-07-15 23:28:35.489366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.282 qpair failed and we were unable to recover it. 00:25:20.282 [2024-07-15 23:28:35.489509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.282 [2024-07-15 23:28:35.489534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.282 qpair failed and we were unable to recover it. 00:25:20.282 [2024-07-15 23:28:35.489717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.282 [2024-07-15 23:28:35.489750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.282 qpair failed and we were unable to recover it. 00:25:20.282 [2024-07-15 23:28:35.489875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.282 [2024-07-15 23:28:35.489918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.282 qpair failed and we were unable to recover it. 00:25:20.282 [2024-07-15 23:28:35.490096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.282 [2024-07-15 23:28:35.490121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.282 qpair failed and we were unable to recover it. 
00:25:20.282 [2024-07-15 23:28:35.490319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.282 [2024-07-15 23:28:35.490344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.282 qpair failed and we were unable to recover it. 00:25:20.282 [2024-07-15 23:28:35.490509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.282 [2024-07-15 23:28:35.490533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.282 qpair failed and we were unable to recover it. 00:25:20.282 [2024-07-15 23:28:35.490693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.282 [2024-07-15 23:28:35.490718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.282 qpair failed and we were unable to recover it. 00:25:20.282 [2024-07-15 23:28:35.490910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.282 [2024-07-15 23:28:35.490958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.282 qpair failed and we were unable to recover it. 00:25:20.282 [2024-07-15 23:28:35.491095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.282 [2024-07-15 23:28:35.491136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.282 qpair failed and we were unable to recover it. 00:25:20.282 [2024-07-15 23:28:35.491310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.282 [2024-07-15 23:28:35.491353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.282 qpair failed and we were unable to recover it. 00:25:20.282 [2024-07-15 23:28:35.491523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.282 [2024-07-15 23:28:35.491548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.282 qpair failed and we were unable to recover it. 00:25:20.282 [2024-07-15 23:28:35.491755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.282 [2024-07-15 23:28:35.491781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.282 qpair failed and we were unable to recover it. 00:25:20.282 [2024-07-15 23:28:35.491968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.282 [2024-07-15 23:28:35.492010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.282 qpair failed and we were unable to recover it. 00:25:20.282 [2024-07-15 23:28:35.492187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.282 [2024-07-15 23:28:35.492228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.282 qpair failed and we were unable to recover it. 
00:25:20.282 [2024-07-15 23:28:35.492333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.282 [2024-07-15 23:28:35.492362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.282 qpair failed and we were unable to recover it. 00:25:20.282 [2024-07-15 23:28:35.492537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.282 [2024-07-15 23:28:35.492561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.282 qpair failed and we were unable to recover it. 00:25:20.282 [2024-07-15 23:28:35.492752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.282 [2024-07-15 23:28:35.492777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.282 qpair failed and we were unable to recover it. 00:25:20.282 [2024-07-15 23:28:35.492915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.282 [2024-07-15 23:28:35.492957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.282 qpair failed and we were unable to recover it. 00:25:20.282 [2024-07-15 23:28:35.493122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.282 [2024-07-15 23:28:35.493167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.282 qpair failed and we were unable to recover it. 00:25:20.282 [2024-07-15 23:28:35.493280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.282 [2024-07-15 23:28:35.493320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.282 qpair failed and we were unable to recover it. 00:25:20.282 [2024-07-15 23:28:35.493483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.282 [2024-07-15 23:28:35.493511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.282 qpair failed and we were unable to recover it. 00:25:20.282 [2024-07-15 23:28:35.493769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.282 [2024-07-15 23:28:35.493795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.282 qpair failed and we were unable to recover it. 00:25:20.282 [2024-07-15 23:28:35.493999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.282 [2024-07-15 23:28:35.494040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.282 qpair failed and we were unable to recover it. 00:25:20.282 [2024-07-15 23:28:35.494202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.282 [2024-07-15 23:28:35.494244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.283 qpair failed and we were unable to recover it. 
00:25:20.283 [2024-07-15 23:28:35.494391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.283 [2024-07-15 23:28:35.494433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.283 qpair failed and we were unable to recover it. 00:25:20.283 [2024-07-15 23:28:35.494598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.283 [2024-07-15 23:28:35.494637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.283 qpair failed and we were unable to recover it. 00:25:20.283 [2024-07-15 23:28:35.494805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.283 [2024-07-15 23:28:35.494833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.283 qpair failed and we were unable to recover it. 00:25:20.283 [2024-07-15 23:28:35.495059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.283 [2024-07-15 23:28:35.495102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.283 qpair failed and we were unable to recover it. 00:25:20.283 [2024-07-15 23:28:35.495296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.283 [2024-07-15 23:28:35.495337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.283 qpair failed and we were unable to recover it. 00:25:20.283 [2024-07-15 23:28:35.495533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.283 [2024-07-15 23:28:35.495573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.283 qpair failed and we were unable to recover it. 00:25:20.283 [2024-07-15 23:28:35.495712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.283 [2024-07-15 23:28:35.495736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.283 qpair failed and we were unable to recover it. 00:25:20.283 [2024-07-15 23:28:35.495993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.283 [2024-07-15 23:28:35.496035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.283 qpair failed and we were unable to recover it. 00:25:20.283 [2024-07-15 23:28:35.496227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.283 [2024-07-15 23:28:35.496252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.283 qpair failed and we were unable to recover it. 00:25:20.283 [2024-07-15 23:28:35.496475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.283 [2024-07-15 23:28:35.496515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.283 qpair failed and we were unable to recover it. 
00:25:20.283 [2024-07-15 23:28:35.496718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.283 [2024-07-15 23:28:35.496749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.283 qpair failed and we were unable to recover it. 00:25:20.283 [2024-07-15 23:28:35.496890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.283 [2024-07-15 23:28:35.496916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.283 qpair failed and we were unable to recover it. 00:25:20.283 [2024-07-15 23:28:35.497056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.283 [2024-07-15 23:28:35.497084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.283 qpair failed and we were unable to recover it. 00:25:20.283 [2024-07-15 23:28:35.497228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.283 [2024-07-15 23:28:35.497271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.283 qpair failed and we were unable to recover it. 00:25:20.283 [2024-07-15 23:28:35.497427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.283 [2024-07-15 23:28:35.497469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.283 qpair failed and we were unable to recover it. 00:25:20.283 [2024-07-15 23:28:35.497644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.283 [2024-07-15 23:28:35.497669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.283 qpair failed and we were unable to recover it. 00:25:20.283 [2024-07-15 23:28:35.497824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.283 [2024-07-15 23:28:35.497853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.283 qpair failed and we were unable to recover it. 00:25:20.283 [2024-07-15 23:28:35.498042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.283 [2024-07-15 23:28:35.498085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.283 qpair failed and we were unable to recover it. 00:25:20.283 [2024-07-15 23:28:35.498221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.283 [2024-07-15 23:28:35.498249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.283 qpair failed and we were unable to recover it. 00:25:20.283 [2024-07-15 23:28:35.498478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.283 [2024-07-15 23:28:35.498506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.283 qpair failed and we were unable to recover it. 
00:25:20.283 [2024-07-15 23:28:35.498666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.283 [2024-07-15 23:28:35.498701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.283 qpair failed and we were unable to recover it. 00:25:20.283 [2024-07-15 23:28:35.498868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.283 [2024-07-15 23:28:35.498912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.283 qpair failed and we were unable to recover it. 00:25:20.283 [2024-07-15 23:28:35.499066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.283 [2024-07-15 23:28:35.499118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.283 qpair failed and we were unable to recover it. 00:25:20.283 [2024-07-15 23:28:35.499278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.283 [2024-07-15 23:28:35.499320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.283 qpair failed and we were unable to recover it. 00:25:20.283 [2024-07-15 23:28:35.499431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.283 [2024-07-15 23:28:35.499457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.283 qpair failed and we were unable to recover it. 00:25:20.283 [2024-07-15 23:28:35.499681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.283 [2024-07-15 23:28:35.499706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.283 qpair failed and we were unable to recover it. 00:25:20.283 [2024-07-15 23:28:35.499836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.283 [2024-07-15 23:28:35.499861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.283 qpair failed and we were unable to recover it. 00:25:20.283 [2024-07-15 23:28:35.500054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.283 [2024-07-15 23:28:35.500079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.283 qpair failed and we were unable to recover it. 00:25:20.283 [2024-07-15 23:28:35.500241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.283 [2024-07-15 23:28:35.500280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.283 qpair failed and we were unable to recover it. 00:25:20.283 [2024-07-15 23:28:35.500436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.283 [2024-07-15 23:28:35.500460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.283 qpair failed and we were unable to recover it. 
00:25:20.283 [2024-07-15 23:28:35.500602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.283 [2024-07-15 23:28:35.500627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.283 qpair failed and we were unable to recover it. 00:25:20.283 [2024-07-15 23:28:35.500786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.283 [2024-07-15 23:28:35.500812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.283 qpair failed and we were unable to recover it. 00:25:20.283 [2024-07-15 23:28:35.501018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.283 [2024-07-15 23:28:35.501057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.283 qpair failed and we were unable to recover it. 00:25:20.283 [2024-07-15 23:28:35.501275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.283 [2024-07-15 23:28:35.501298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.283 qpair failed and we were unable to recover it. 00:25:20.283 [2024-07-15 23:28:35.501517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.283 [2024-07-15 23:28:35.501572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.283 qpair failed and we were unable to recover it. 00:25:20.283 [2024-07-15 23:28:35.501793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.283 [2024-07-15 23:28:35.501836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.283 qpair failed and we were unable to recover it. 00:25:20.283 [2024-07-15 23:28:35.501977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.283 [2024-07-15 23:28:35.502029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.283 qpair failed and we were unable to recover it. 00:25:20.283 [2024-07-15 23:28:35.502286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.283 [2024-07-15 23:28:35.502327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.283 qpair failed and we were unable to recover it. 00:25:20.283 [2024-07-15 23:28:35.502503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.283 [2024-07-15 23:28:35.502528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.283 qpair failed and we were unable to recover it. 00:25:20.283 [2024-07-15 23:28:35.502691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.283 [2024-07-15 23:28:35.502716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.283 qpair failed and we were unable to recover it. 
00:25:20.283 [2024-07-15 23:28:35.502937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.284 [2024-07-15 23:28:35.502980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.284 qpair failed and we were unable to recover it. 00:25:20.284 [2024-07-15 23:28:35.503184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.284 [2024-07-15 23:28:35.503235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.284 qpair failed and we were unable to recover it. 00:25:20.284 [2024-07-15 23:28:35.503407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.284 [2024-07-15 23:28:35.503448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.284 qpair failed and we were unable to recover it. 00:25:20.284 [2024-07-15 23:28:35.503606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.284 [2024-07-15 23:28:35.503630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.284 qpair failed and we were unable to recover it. 00:25:20.284 [2024-07-15 23:28:35.503836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.284 [2024-07-15 23:28:35.503864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.284 qpair failed and we were unable to recover it. 00:25:20.284 [2024-07-15 23:28:35.504041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.284 [2024-07-15 23:28:35.504083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.284 qpair failed and we were unable to recover it. 00:25:20.284 [2024-07-15 23:28:35.504199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.284 [2024-07-15 23:28:35.504224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.284 qpair failed and we were unable to recover it. 00:25:20.284 [2024-07-15 23:28:35.504375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.284 [2024-07-15 23:28:35.504399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.284 qpair failed and we were unable to recover it. 00:25:20.284 [2024-07-15 23:28:35.504598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.284 [2024-07-15 23:28:35.504623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.284 qpair failed and we were unable to recover it. 00:25:20.284 [2024-07-15 23:28:35.504781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.284 [2024-07-15 23:28:35.504807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.284 qpair failed and we were unable to recover it. 
00:25:20.284 [2024-07-15 23:28:35.504958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.284 [2024-07-15 23:28:35.504982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.284 qpair failed and we were unable to recover it. 00:25:20.284 [2024-07-15 23:28:35.505124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.284 [2024-07-15 23:28:35.505165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.284 qpair failed and we were unable to recover it. 00:25:20.284 [2024-07-15 23:28:35.505286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.284 [2024-07-15 23:28:35.505329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.284 qpair failed and we were unable to recover it. 00:25:20.284 [2024-07-15 23:28:35.505543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.284 [2024-07-15 23:28:35.505568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.284 qpair failed and we were unable to recover it. 00:25:20.284 [2024-07-15 23:28:35.505723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.284 [2024-07-15 23:28:35.505764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.284 qpair failed and we were unable to recover it. 00:25:20.284 [2024-07-15 23:28:35.505985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.284 [2024-07-15 23:28:35.506010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.284 qpair failed and we were unable to recover it. 00:25:20.284 [2024-07-15 23:28:35.506159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.284 [2024-07-15 23:28:35.506183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.284 qpair failed and we were unable to recover it. 00:25:20.284 [2024-07-15 23:28:35.506343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.284 [2024-07-15 23:28:35.506384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.284 qpair failed and we were unable to recover it. 00:25:20.284 [2024-07-15 23:28:35.506580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.284 [2024-07-15 23:28:35.506605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.284 qpair failed and we were unable to recover it. 00:25:20.284 [2024-07-15 23:28:35.506763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.284 [2024-07-15 23:28:35.506812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.284 qpair failed and we were unable to recover it. 
00:25:20.284 [2024-07-15 23:28:35.506990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.284 [2024-07-15 23:28:35.507017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.284 qpair failed and we were unable to recover it. 00:25:20.284 [2024-07-15 23:28:35.507227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.284 [2024-07-15 23:28:35.507283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.284 qpair failed and we were unable to recover it. 00:25:20.284 [2024-07-15 23:28:35.507441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.284 [2024-07-15 23:28:35.507470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.284 qpair failed and we were unable to recover it. 00:25:20.284 [2024-07-15 23:28:35.507596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.284 [2024-07-15 23:28:35.507623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.284 qpair failed and we were unable to recover it. 00:25:20.284 [2024-07-15 23:28:35.507760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.284 [2024-07-15 23:28:35.507789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.284 qpair failed and we were unable to recover it. 00:25:20.284 [2024-07-15 23:28:35.507958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.284 [2024-07-15 23:28:35.507986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.284 qpair failed and we were unable to recover it. 00:25:20.284 [2024-07-15 23:28:35.508164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.284 [2024-07-15 23:28:35.508191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.284 qpair failed and we were unable to recover it. 00:25:20.284 [2024-07-15 23:28:35.508307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.284 [2024-07-15 23:28:35.508335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.284 qpair failed and we were unable to recover it. 00:25:20.284 [2024-07-15 23:28:35.508480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.284 [2024-07-15 23:28:35.508508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.284 qpair failed and we were unable to recover it. 00:25:20.284 [2024-07-15 23:28:35.508623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.284 [2024-07-15 23:28:35.508650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.284 qpair failed and we were unable to recover it. 
00:25:20.284 [2024-07-15 23:28:35.508805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.284 [2024-07-15 23:28:35.508846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.284 qpair failed and we were unable to recover it. 00:25:20.284 [2024-07-15 23:28:35.508970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.284 [2024-07-15 23:28:35.508999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.284 qpair failed and we were unable to recover it. 00:25:20.284 [2024-07-15 23:28:35.509169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.284 [2024-07-15 23:28:35.509196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.284 qpair failed and we were unable to recover it. 00:25:20.284 [2024-07-15 23:28:35.509420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.284 [2024-07-15 23:28:35.509476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.284 qpair failed and we were unable to recover it. 00:25:20.284 [2024-07-15 23:28:35.509584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.284 [2024-07-15 23:28:35.509608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.284 qpair failed and we were unable to recover it. 00:25:20.284 [2024-07-15 23:28:35.509803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.284 [2024-07-15 23:28:35.509846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.284 qpair failed and we were unable to recover it. 00:25:20.284 [2024-07-15 23:28:35.510047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.284 [2024-07-15 23:28:35.510073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.284 qpair failed and we were unable to recover it. 00:25:20.284 [2024-07-15 23:28:35.510207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.284 [2024-07-15 23:28:35.510248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.284 qpair failed and we were unable to recover it. 00:25:20.284 [2024-07-15 23:28:35.510464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.284 [2024-07-15 23:28:35.510491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a18000b90 with addr=10.0.0.2, port=4420 00:25:20.284 qpair failed and we were unable to recover it. 00:25:20.284 [2024-07-15 23:28:35.510630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.284 [2024-07-15 23:28:35.510656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.284 qpair failed and we were unable to recover it. 
00:25:20.284 [2024-07-15 23:28:35.510846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.284 [2024-07-15 23:28:35.510873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.284 qpair failed and we were unable to recover it. 00:25:20.284 [2024-07-15 23:28:35.511052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.285 [2024-07-15 23:28:35.511094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.285 qpair failed and we were unable to recover it. 00:25:20.285 [2024-07-15 23:28:35.511262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.285 [2024-07-15 23:28:35.511315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.285 qpair failed and we were unable to recover it. 00:25:20.285 [2024-07-15 23:28:35.511424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.285 [2024-07-15 23:28:35.511452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.285 qpair failed and we were unable to recover it. 00:25:20.285 [2024-07-15 23:28:35.511560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.285 [2024-07-15 23:28:35.511588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.285 qpair failed and we were unable to recover it. 00:25:20.285 [2024-07-15 23:28:35.511808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.285 [2024-07-15 23:28:35.511834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.285 qpair failed and we were unable to recover it. 00:25:20.285 [2024-07-15 23:28:35.511938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.285 [2024-07-15 23:28:35.511963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.285 qpair failed and we were unable to recover it. 00:25:20.285 [2024-07-15 23:28:35.512106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.285 [2024-07-15 23:28:35.512133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.285 qpair failed and we were unable to recover it. 00:25:20.285 [2024-07-15 23:28:35.512389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.285 [2024-07-15 23:28:35.512417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.285 qpair failed and we were unable to recover it. 00:25:20.285 [2024-07-15 23:28:35.512597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.285 [2024-07-15 23:28:35.512624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.285 qpair failed and we were unable to recover it. 
00:25:20.603 [2024-07-15 23:28:35.553337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.603 [2024-07-15 23:28:35.553364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.603 qpair failed and we were unable to recover it. 00:25:20.603 [2024-07-15 23:28:35.553533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.603 [2024-07-15 23:28:35.553565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.603 qpair failed and we were unable to recover it. 00:25:20.603 [2024-07-15 23:28:35.553734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.603 [2024-07-15 23:28:35.553768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.603 qpair failed and we were unable to recover it. 00:25:20.603 [2024-07-15 23:28:35.553915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.603 [2024-07-15 23:28:35.553939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.603 qpair failed and we were unable to recover it. 00:25:20.603 [2024-07-15 23:28:35.554131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.603 [2024-07-15 23:28:35.554158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.603 qpair failed and we were unable to recover it. 00:25:20.603 [2024-07-15 23:28:35.554286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.603 [2024-07-15 23:28:35.554323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.603 qpair failed and we were unable to recover it. 00:25:20.604 [2024-07-15 23:28:35.554524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.604 [2024-07-15 23:28:35.554551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.604 qpair failed and we were unable to recover it. 00:25:20.604 [2024-07-15 23:28:35.554702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.604 [2024-07-15 23:28:35.554730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.604 qpair failed and we were unable to recover it. 00:25:20.604 [2024-07-15 23:28:35.554855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.604 [2024-07-15 23:28:35.554878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.604 qpair failed and we were unable to recover it. 00:25:20.604 [2024-07-15 23:28:35.555038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.604 [2024-07-15 23:28:35.555060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.604 qpair failed and we were unable to recover it. 
00:25:20.604 [2024-07-15 23:28:35.555256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.604 [2024-07-15 23:28:35.555291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.604 qpair failed and we were unable to recover it. 00:25:20.604 [2024-07-15 23:28:35.555500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.604 [2024-07-15 23:28:35.555522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.604 qpair failed and we were unable to recover it. 00:25:20.604 [2024-07-15 23:28:35.555754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.604 [2024-07-15 23:28:35.555782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.604 qpair failed and we were unable to recover it. 00:25:20.604 [2024-07-15 23:28:35.555935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.604 [2024-07-15 23:28:35.555963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.604 qpair failed and we were unable to recover it. 00:25:20.604 [2024-07-15 23:28:35.556232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.604 [2024-07-15 23:28:35.556254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.604 qpair failed and we were unable to recover it. 00:25:20.604 [2024-07-15 23:28:35.556435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.604 [2024-07-15 23:28:35.556463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.604 qpair failed and we were unable to recover it. 00:25:20.604 [2024-07-15 23:28:35.556638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.604 [2024-07-15 23:28:35.556665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.604 qpair failed and we were unable to recover it. 00:25:20.604 [2024-07-15 23:28:35.556822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.604 [2024-07-15 23:28:35.556846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.604 qpair failed and we were unable to recover it. 00:25:20.604 [2024-07-15 23:28:35.557115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.604 [2024-07-15 23:28:35.557142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.604 qpair failed and we were unable to recover it. 00:25:20.604 [2024-07-15 23:28:35.557369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.604 [2024-07-15 23:28:35.557396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.604 qpair failed and we were unable to recover it. 
00:25:20.604 [2024-07-15 23:28:35.557562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.604 [2024-07-15 23:28:35.557584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.604 qpair failed and we were unable to recover it. 00:25:20.604 [2024-07-15 23:28:35.557798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.604 [2024-07-15 23:28:35.557827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.604 qpair failed and we were unable to recover it. 00:25:20.604 [2024-07-15 23:28:35.558006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.604 [2024-07-15 23:28:35.558034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.604 qpair failed and we were unable to recover it. 00:25:20.604 [2024-07-15 23:28:35.558216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.604 [2024-07-15 23:28:35.558238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.604 qpair failed and we were unable to recover it. 00:25:20.604 [2024-07-15 23:28:35.558414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.604 [2024-07-15 23:28:35.558441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.604 qpair failed and we were unable to recover it. 00:25:20.604 [2024-07-15 23:28:35.558647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.604 [2024-07-15 23:28:35.558675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.604 qpair failed and we were unable to recover it. 00:25:20.604 [2024-07-15 23:28:35.558842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.604 [2024-07-15 23:28:35.558865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.604 qpair failed and we were unable to recover it. 00:25:20.604 [2024-07-15 23:28:35.559123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.604 [2024-07-15 23:28:35.559150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.604 qpair failed and we were unable to recover it. 00:25:20.604 [2024-07-15 23:28:35.559336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.604 [2024-07-15 23:28:35.559364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.604 qpair failed and we were unable to recover it. 00:25:20.604 [2024-07-15 23:28:35.559569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.604 [2024-07-15 23:28:35.559592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.604 qpair failed and we were unable to recover it. 
00:25:20.604 [2024-07-15 23:28:35.559816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.604 [2024-07-15 23:28:35.559841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.604 qpair failed and we were unable to recover it. 00:25:20.604 [2024-07-15 23:28:35.560014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.604 [2024-07-15 23:28:35.560056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.604 qpair failed and we were unable to recover it. 00:25:20.604 [2024-07-15 23:28:35.560214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.604 [2024-07-15 23:28:35.560236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.604 qpair failed and we were unable to recover it. 00:25:20.604 [2024-07-15 23:28:35.560441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.604 [2024-07-15 23:28:35.560468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.604 qpair failed and we were unable to recover it. 00:25:20.604 [2024-07-15 23:28:35.560615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.604 [2024-07-15 23:28:35.560642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.604 qpair failed and we were unable to recover it. 00:25:20.604 [2024-07-15 23:28:35.560806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.604 [2024-07-15 23:28:35.560831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.604 qpair failed and we were unable to recover it. 00:25:20.604 [2024-07-15 23:28:35.561040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.604 [2024-07-15 23:28:35.561074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.604 qpair failed and we were unable to recover it. 00:25:20.604 [2024-07-15 23:28:35.561189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.604 [2024-07-15 23:28:35.561216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.604 qpair failed and we were unable to recover it. 00:25:20.604 [2024-07-15 23:28:35.561434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.604 [2024-07-15 23:28:35.561457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.604 qpair failed and we were unable to recover it. 00:25:20.604 [2024-07-15 23:28:35.561635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.604 [2024-07-15 23:28:35.561662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.604 qpair failed and we were unable to recover it. 
00:25:20.604 [2024-07-15 23:28:35.561810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.604 [2024-07-15 23:28:35.561839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.604 qpair failed and we were unable to recover it. 00:25:20.604 [2024-07-15 23:28:35.561985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.605 [2024-07-15 23:28:35.562008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.605 qpair failed and we were unable to recover it. 00:25:20.605 [2024-07-15 23:28:35.562194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.605 [2024-07-15 23:28:35.562226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.605 qpair failed and we were unable to recover it. 00:25:20.605 [2024-07-15 23:28:35.562388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.605 [2024-07-15 23:28:35.562416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.605 qpair failed and we were unable to recover it. 00:25:20.605 [2024-07-15 23:28:35.562655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.605 [2024-07-15 23:28:35.562682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.605 qpair failed and we were unable to recover it. 00:25:20.605 [2024-07-15 23:28:35.562956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.605 [2024-07-15 23:28:35.562984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.605 qpair failed and we were unable to recover it. 00:25:20.605 [2024-07-15 23:28:35.563132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.605 [2024-07-15 23:28:35.563159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.605 qpair failed and we were unable to recover it. 00:25:20.605 [2024-07-15 23:28:35.563390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.605 [2024-07-15 23:28:35.563413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.605 qpair failed and we were unable to recover it. 00:25:20.605 [2024-07-15 23:28:35.563621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.605 [2024-07-15 23:28:35.563648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.605 qpair failed and we were unable to recover it. 00:25:20.605 [2024-07-15 23:28:35.563808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.605 [2024-07-15 23:28:35.563836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.605 qpair failed and we were unable to recover it. 
00:25:20.605 [2024-07-15 23:28:35.563973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.605 [2024-07-15 23:28:35.563997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.605 qpair failed and we were unable to recover it. 00:25:20.605 [2024-07-15 23:28:35.564259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.605 [2024-07-15 23:28:35.564286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.605 qpair failed and we were unable to recover it. 00:25:20.605 [2024-07-15 23:28:35.564477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.605 [2024-07-15 23:28:35.564505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.605 qpair failed and we were unable to recover it. 00:25:20.605 [2024-07-15 23:28:35.564669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.605 [2024-07-15 23:28:35.564691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.605 qpair failed and we were unable to recover it. 00:25:20.605 [2024-07-15 23:28:35.564836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.605 [2024-07-15 23:28:35.564877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.605 qpair failed and we were unable to recover it. 00:25:20.605 [2024-07-15 23:28:35.565083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.605 [2024-07-15 23:28:35.565110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.605 qpair failed and we were unable to recover it. 00:25:20.605 [2024-07-15 23:28:35.565229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.605 [2024-07-15 23:28:35.565251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.605 qpair failed and we were unable to recover it. 00:25:20.605 [2024-07-15 23:28:35.565392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.605 [2024-07-15 23:28:35.565415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.605 qpair failed and we were unable to recover it. 00:25:20.605 [2024-07-15 23:28:35.565560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.605 [2024-07-15 23:28:35.565588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.605 qpair failed and we were unable to recover it. 00:25:20.605 [2024-07-15 23:28:35.565776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.605 [2024-07-15 23:28:35.565816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.605 qpair failed and we were unable to recover it. 
00:25:20.605 [2024-07-15 23:28:35.566004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.605 [2024-07-15 23:28:35.566050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.605 qpair failed and we were unable to recover it. 00:25:20.605 [2024-07-15 23:28:35.566200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.605 [2024-07-15 23:28:35.566227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.605 qpair failed and we were unable to recover it. 00:25:20.605 [2024-07-15 23:28:35.566436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.605 [2024-07-15 23:28:35.566458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.605 qpair failed and we were unable to recover it. 00:25:20.605 [2024-07-15 23:28:35.566575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.605 [2024-07-15 23:28:35.566602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.605 qpair failed and we were unable to recover it. 00:25:20.605 [2024-07-15 23:28:35.566864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.605 [2024-07-15 23:28:35.566889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.605 qpair failed and we were unable to recover it. 00:25:20.605 [2024-07-15 23:28:35.567090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.605 [2024-07-15 23:28:35.567113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.605 qpair failed and we were unable to recover it. 00:25:20.605 [2024-07-15 23:28:35.567266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.605 [2024-07-15 23:28:35.567293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.605 qpair failed and we were unable to recover it. 00:25:20.605 [2024-07-15 23:28:35.567588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.605 [2024-07-15 23:28:35.567615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.605 qpair failed and we were unable to recover it. 00:25:20.605 [2024-07-15 23:28:35.567796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.605 [2024-07-15 23:28:35.567820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.605 qpair failed and we were unable to recover it. 00:25:20.605 [2024-07-15 23:28:35.568012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.605 [2024-07-15 23:28:35.568054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.605 qpair failed and we were unable to recover it. 
00:25:20.605 [2024-07-15 23:28:35.568187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.605 [2024-07-15 23:28:35.568214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.605 qpair failed and we were unable to recover it. 00:25:20.605 [2024-07-15 23:28:35.568360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.605 [2024-07-15 23:28:35.568397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.605 qpair failed and we were unable to recover it. 00:25:20.605 [2024-07-15 23:28:35.568517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.605 [2024-07-15 23:28:35.568540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.605 qpair failed and we were unable to recover it. 00:25:20.605 [2024-07-15 23:28:35.568727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.605 [2024-07-15 23:28:35.568764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.605 qpair failed and we were unable to recover it. 00:25:20.605 [2024-07-15 23:28:35.568909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.605 [2024-07-15 23:28:35.568933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.605 qpair failed and we were unable to recover it. 00:25:20.605 [2024-07-15 23:28:35.569073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.605 [2024-07-15 23:28:35.569111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.605 qpair failed and we were unable to recover it. 00:25:20.605 [2024-07-15 23:28:35.569219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.605 [2024-07-15 23:28:35.569246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.605 qpair failed and we were unable to recover it. 00:25:20.605 [2024-07-15 23:28:35.569396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.605 [2024-07-15 23:28:35.569419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.605 qpair failed and we were unable to recover it. 00:25:20.605 [2024-07-15 23:28:35.569630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.605 [2024-07-15 23:28:35.569657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.605 qpair failed and we were unable to recover it. 00:25:20.605 [2024-07-15 23:28:35.569800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.605 [2024-07-15 23:28:35.569827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.605 qpair failed and we were unable to recover it. 
00:25:20.605 [2024-07-15 23:28:35.570002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.605 [2024-07-15 23:28:35.570039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.605 qpair failed and we were unable to recover it. 00:25:20.605 [2024-07-15 23:28:35.570240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.605 [2024-07-15 23:28:35.570267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.605 qpair failed and we were unable to recover it. 00:25:20.606 [2024-07-15 23:28:35.570475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.606 [2024-07-15 23:28:35.570502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.606 qpair failed and we were unable to recover it. 00:25:20.606 [2024-07-15 23:28:35.570651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.606 [2024-07-15 23:28:35.570689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.606 qpair failed and we were unable to recover it. 00:25:20.606 [2024-07-15 23:28:35.570917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.606 [2024-07-15 23:28:35.570945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.606 qpair failed and we were unable to recover it. 00:25:20.606 [2024-07-15 23:28:35.571051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.606 [2024-07-15 23:28:35.571078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.606 qpair failed and we were unable to recover it. 00:25:20.606 [2024-07-15 23:28:35.571241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.606 [2024-07-15 23:28:35.571278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.606 qpair failed and we were unable to recover it. 00:25:20.606 [2024-07-15 23:28:35.571443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.606 [2024-07-15 23:28:35.571477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.606 qpair failed and we were unable to recover it. 00:25:20.606 [2024-07-15 23:28:35.571694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.606 [2024-07-15 23:28:35.571722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.606 qpair failed and we were unable to recover it. 00:25:20.606 [2024-07-15 23:28:35.571853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.606 [2024-07-15 23:28:35.571876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.606 qpair failed and we were unable to recover it. 
00:25:20.606 [2024-07-15 23:28:35.572239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.606 [2024-07-15 23:28:35.572296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.606 qpair failed and we were unable to recover it. 00:25:20.606 [2024-07-15 23:28:35.572456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.606 [2024-07-15 23:28:35.572483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.606 qpair failed and we were unable to recover it. 00:25:20.606 [2024-07-15 23:28:35.572616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.606 [2024-07-15 23:28:35.572657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.606 qpair failed and we were unable to recover it. 00:25:20.606 [2024-07-15 23:28:35.572852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.606 [2024-07-15 23:28:35.572876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.606 qpair failed and we were unable to recover it. 00:25:20.606 [2024-07-15 23:28:35.573034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.606 [2024-07-15 23:28:35.573061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.606 qpair failed and we were unable to recover it. 00:25:20.606 [2024-07-15 23:28:35.573275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.606 [2024-07-15 23:28:35.573297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.606 qpair failed and we were unable to recover it. 00:25:20.606 [2024-07-15 23:28:35.573455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.606 [2024-07-15 23:28:35.573487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.606 qpair failed and we were unable to recover it. 00:25:20.606 [2024-07-15 23:28:35.573691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.606 [2024-07-15 23:28:35.573719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.606 qpair failed and we were unable to recover it. 00:25:20.606 [2024-07-15 23:28:35.573949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.606 [2024-07-15 23:28:35.573975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.606 qpair failed and we were unable to recover it. 00:25:20.606 [2024-07-15 23:28:35.574112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.606 [2024-07-15 23:28:35.574157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.606 qpair failed and we were unable to recover it. 
00:25:20.606 [2024-07-15 23:28:35.574385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.606 [2024-07-15 23:28:35.574413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.606 qpair failed and we were unable to recover it. 00:25:20.606 [2024-07-15 23:28:35.574558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.606 [2024-07-15 23:28:35.574581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.606 qpair failed and we were unable to recover it. 00:25:20.606 [2024-07-15 23:28:35.574768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.606 [2024-07-15 23:28:35.574796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.606 qpair failed and we were unable to recover it. 00:25:20.606 [2024-07-15 23:28:35.575005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.606 [2024-07-15 23:28:35.575032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.606 qpair failed and we were unable to recover it. 00:25:20.606 [2024-07-15 23:28:35.575194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.606 [2024-07-15 23:28:35.575218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.606 qpair failed and we were unable to recover it. 00:25:20.606 [2024-07-15 23:28:35.575347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.606 [2024-07-15 23:28:35.575394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.606 qpair failed and we were unable to recover it. 00:25:20.606 [2024-07-15 23:28:35.575537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.606 [2024-07-15 23:28:35.575565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.606 qpair failed and we were unable to recover it. 00:25:20.606 [2024-07-15 23:28:35.575751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.606 [2024-07-15 23:28:35.575782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.606 qpair failed and we were unable to recover it. 00:25:20.606 [2024-07-15 23:28:35.575969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.606 [2024-07-15 23:28:35.575997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.606 qpair failed and we were unable to recover it. 00:25:20.606 [2024-07-15 23:28:35.576167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.606 [2024-07-15 23:28:35.576195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.606 qpair failed and we were unable to recover it. 
00:25:20.606 [2024-07-15 23:28:35.576454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.606 [2024-07-15 23:28:35.576476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.606 qpair failed and we were unable to recover it. 00:25:20.606 [2024-07-15 23:28:35.576663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.606 [2024-07-15 23:28:35.576697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.606 qpair failed and we were unable to recover it. 00:25:20.606 [2024-07-15 23:28:35.576859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.606 [2024-07-15 23:28:35.576897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.606 qpair failed and we were unable to recover it. 00:25:20.606 [2024-07-15 23:28:35.577100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.606 [2024-07-15 23:28:35.577122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.606 qpair failed and we were unable to recover it. 00:25:20.606 [2024-07-15 23:28:35.577351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.606 [2024-07-15 23:28:35.577379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.606 qpair failed and we were unable to recover it. 00:25:20.606 [2024-07-15 23:28:35.577539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.606 [2024-07-15 23:28:35.577566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.606 qpair failed and we were unable to recover it. 00:25:20.606 [2024-07-15 23:28:35.577770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.606 [2024-07-15 23:28:35.577819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.606 qpair failed and we were unable to recover it. 00:25:20.606 [2024-07-15 23:28:35.578067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.606 [2024-07-15 23:28:35.578104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.606 qpair failed and we were unable to recover it. 00:25:20.606 [2024-07-15 23:28:35.578269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.606 [2024-07-15 23:28:35.578297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.606 qpair failed and we were unable to recover it. 00:25:20.606 [2024-07-15 23:28:35.578459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.606 [2024-07-15 23:28:35.578492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.606 qpair failed and we were unable to recover it. 
00:25:20.606 [2024-07-15 23:28:35.578760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.606 [2024-07-15 23:28:35.578788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.606 qpair failed and we were unable to recover it. 00:25:20.606 [2024-07-15 23:28:35.578977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.606 [2024-07-15 23:28:35.579004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.606 qpair failed and we were unable to recover it. 00:25:20.606 [2024-07-15 23:28:35.579163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.606 [2024-07-15 23:28:35.579186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.606 qpair failed and we were unable to recover it. 00:25:20.607 [2024-07-15 23:28:35.579388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.607 [2024-07-15 23:28:35.579428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.607 qpair failed and we were unable to recover it. 00:25:20.607 [2024-07-15 23:28:35.579582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.607 [2024-07-15 23:28:35.579609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.607 qpair failed and we were unable to recover it. 00:25:20.607 [2024-07-15 23:28:35.579824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.607 [2024-07-15 23:28:35.579848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.607 qpair failed and we were unable to recover it. 00:25:20.607 [2024-07-15 23:28:35.580072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.607 [2024-07-15 23:28:35.580109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.607 qpair failed and we were unable to recover it. 00:25:20.607 [2024-07-15 23:28:35.580258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.607 [2024-07-15 23:28:35.580285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.607 qpair failed and we were unable to recover it. 00:25:20.607 [2024-07-15 23:28:35.580420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.607 [2024-07-15 23:28:35.580457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.607 qpair failed and we were unable to recover it. 00:25:20.607 [2024-07-15 23:28:35.580631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.607 [2024-07-15 23:28:35.580653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.607 qpair failed and we were unable to recover it. 
00:25:20.607 [2024-07-15 23:28:35.580845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.607 [2024-07-15 23:28:35.580873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.607 qpair failed and we were unable to recover it. 00:25:20.607 [2024-07-15 23:28:35.581058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.607 [2024-07-15 23:28:35.581081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.607 qpair failed and we were unable to recover it. 00:25:20.607 [2024-07-15 23:28:35.581285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.607 [2024-07-15 23:28:35.581323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.607 qpair failed and we were unable to recover it. 00:25:20.607 [2024-07-15 23:28:35.581461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.607 [2024-07-15 23:28:35.581488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.607 qpair failed and we were unable to recover it. 00:25:20.607 [2024-07-15 23:28:35.581629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.607 [2024-07-15 23:28:35.581652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.607 qpair failed and we were unable to recover it. 00:25:20.607 [2024-07-15 23:28:35.581807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.607 [2024-07-15 23:28:35.581839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.607 qpair failed and we were unable to recover it. 00:25:20.607 [2024-07-15 23:28:35.581985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.607 [2024-07-15 23:28:35.582013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.607 qpair failed and we were unable to recover it. 00:25:20.607 [2024-07-15 23:28:35.582151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.607 [2024-07-15 23:28:35.582188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.607 qpair failed and we were unable to recover it. 00:25:20.607 [2024-07-15 23:28:35.582371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.607 [2024-07-15 23:28:35.582398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.607 qpair failed and we were unable to recover it. 00:25:20.607 [2024-07-15 23:28:35.582560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.607 [2024-07-15 23:28:35.582589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.607 qpair failed and we were unable to recover it. 
00:25:20.607 [... the same three-line error group (posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously, with only the microsecond timestamps advancing, from 23:28:35.582855 through 23:28:35.625269; every connect attempt is refused and no qpair recovers ...]
00:25:20.612 [2024-07-15 23:28:35.625448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.612 [2024-07-15 23:28:35.625475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.612 qpair failed and we were unable to recover it. 00:25:20.612 [2024-07-15 23:28:35.625604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.612 [2024-07-15 23:28:35.625640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.612 qpair failed and we were unable to recover it. 00:25:20.612 [2024-07-15 23:28:35.625851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.612 [2024-07-15 23:28:35.625879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.612 qpair failed and we were unable to recover it. 00:25:20.612 [2024-07-15 23:28:35.626023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.612 [2024-07-15 23:28:35.626050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.612 qpair failed and we were unable to recover it. 00:25:20.612 [2024-07-15 23:28:35.626240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.612 [2024-07-15 23:28:35.626262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.612 qpair failed and we were unable to recover it. 00:25:20.612 [2024-07-15 23:28:35.626481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.612 [2024-07-15 23:28:35.626508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.612 qpair failed and we were unable to recover it. 00:25:20.612 [2024-07-15 23:28:35.626685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.612 [2024-07-15 23:28:35.626713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.612 qpair failed and we were unable to recover it. 00:25:20.612 [2024-07-15 23:28:35.626935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.612 [2024-07-15 23:28:35.626959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.612 qpair failed and we were unable to recover it. 00:25:20.612 [2024-07-15 23:28:35.627090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.612 [2024-07-15 23:28:35.627117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.612 qpair failed and we were unable to recover it. 00:25:20.612 [2024-07-15 23:28:35.627288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.612 [2024-07-15 23:28:35.627315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.612 qpair failed and we were unable to recover it. 
00:25:20.612 [2024-07-15 23:28:35.627499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.612 [2024-07-15 23:28:35.627522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.612 qpair failed and we were unable to recover it. 00:25:20.612 [2024-07-15 23:28:35.627713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.612 [2024-07-15 23:28:35.627744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.612 qpair failed and we were unable to recover it. 00:25:20.612 [2024-07-15 23:28:35.627963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.612 [2024-07-15 23:28:35.627996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.612 qpair failed and we were unable to recover it. 00:25:20.612 [2024-07-15 23:28:35.628172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.612 [2024-07-15 23:28:35.628194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.612 qpair failed and we were unable to recover it. 00:25:20.612 [2024-07-15 23:28:35.628361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.612 [2024-07-15 23:28:35.628388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.612 qpair failed and we were unable to recover it. 00:25:20.612 [2024-07-15 23:28:35.628611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.612 [2024-07-15 23:28:35.628643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.613 qpair failed and we were unable to recover it. 00:25:20.613 [2024-07-15 23:28:35.628786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.613 [2024-07-15 23:28:35.628824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.613 qpair failed and we were unable to recover it. 00:25:20.613 [2024-07-15 23:28:35.629036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.613 [2024-07-15 23:28:35.629063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.613 qpair failed and we were unable to recover it. 00:25:20.613 [2024-07-15 23:28:35.629227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.613 [2024-07-15 23:28:35.629254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.613 qpair failed and we were unable to recover it. 00:25:20.613 [2024-07-15 23:28:35.629429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.613 [2024-07-15 23:28:35.629451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.613 qpair failed and we were unable to recover it. 
00:25:20.613 [2024-07-15 23:28:35.629637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.613 [2024-07-15 23:28:35.629664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.613 qpair failed and we were unable to recover it. 00:25:20.613 [2024-07-15 23:28:35.629846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.613 [2024-07-15 23:28:35.629869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.613 qpair failed and we were unable to recover it. 00:25:20.613 [2024-07-15 23:28:35.630074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.613 [2024-07-15 23:28:35.630096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.613 qpair failed and we were unable to recover it. 00:25:20.613 [2024-07-15 23:28:35.630339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.613 [2024-07-15 23:28:35.630366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.613 qpair failed and we were unable to recover it. 00:25:20.613 [2024-07-15 23:28:35.630571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.613 [2024-07-15 23:28:35.630599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.613 qpair failed and we were unable to recover it. 00:25:20.613 [2024-07-15 23:28:35.630741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.613 [2024-07-15 23:28:35.630786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.613 qpair failed and we were unable to recover it. 00:25:20.613 [2024-07-15 23:28:35.630953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.613 [2024-07-15 23:28:35.630980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.613 qpair failed and we were unable to recover it. 00:25:20.613 [2024-07-15 23:28:35.631148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.613 [2024-07-15 23:28:35.631175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.613 qpair failed and we were unable to recover it. 00:25:20.613 [2024-07-15 23:28:35.631369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.613 [2024-07-15 23:28:35.631391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.613 qpair failed and we were unable to recover it. 00:25:20.613 [2024-07-15 23:28:35.631571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.613 [2024-07-15 23:28:35.631598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.613 qpair failed and we were unable to recover it. 
00:25:20.613 [2024-07-15 23:28:35.631764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.613 [2024-07-15 23:28:35.631792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.613 qpair failed and we were unable to recover it. 00:25:20.613 [2024-07-15 23:28:35.631981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.613 [2024-07-15 23:28:35.632004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.613 qpair failed and we were unable to recover it. 00:25:20.613 [2024-07-15 23:28:35.632124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.613 [2024-07-15 23:28:35.632163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.613 qpair failed and we were unable to recover it. 00:25:20.613 [2024-07-15 23:28:35.632384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.613 [2024-07-15 23:28:35.632418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.613 qpair failed and we were unable to recover it. 00:25:20.613 [2024-07-15 23:28:35.632621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.613 [2024-07-15 23:28:35.632643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.613 qpair failed and we were unable to recover it. 00:25:20.613 [2024-07-15 23:28:35.632827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.613 [2024-07-15 23:28:35.632850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.613 qpair failed and we were unable to recover it. 00:25:20.613 [2024-07-15 23:28:35.633068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.613 [2024-07-15 23:28:35.633099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.613 qpair failed and we were unable to recover it. 00:25:20.613 [2024-07-15 23:28:35.633224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.613 [2024-07-15 23:28:35.633247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.613 qpair failed and we were unable to recover it. 00:25:20.613 [2024-07-15 23:28:35.633365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.613 [2024-07-15 23:28:35.633388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.613 qpair failed and we were unable to recover it. 00:25:20.613 [2024-07-15 23:28:35.633565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.613 [2024-07-15 23:28:35.633592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.613 qpair failed and we were unable to recover it. 
00:25:20.613 [2024-07-15 23:28:35.633713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.613 [2024-07-15 23:28:35.633736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.613 qpair failed and we were unable to recover it. 00:25:20.613 [2024-07-15 23:28:35.633918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.613 [2024-07-15 23:28:35.633967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.613 qpair failed and we were unable to recover it. 00:25:20.613 [2024-07-15 23:28:35.634147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.613 [2024-07-15 23:28:35.634174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.613 qpair failed and we were unable to recover it. 00:25:20.613 [2024-07-15 23:28:35.634363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.613 [2024-07-15 23:28:35.634385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.613 qpair failed and we were unable to recover it. 00:25:20.613 [2024-07-15 23:28:35.634526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.613 [2024-07-15 23:28:35.634554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.613 qpair failed and we were unable to recover it. 00:25:20.613 [2024-07-15 23:28:35.634720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.613 [2024-07-15 23:28:35.634753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.613 qpair failed and we were unable to recover it. 00:25:20.613 [2024-07-15 23:28:35.634968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.613 [2024-07-15 23:28:35.634996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.613 qpair failed and we were unable to recover it. 00:25:20.613 [2024-07-15 23:28:35.635227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.613 [2024-07-15 23:28:35.635254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.613 qpair failed and we were unable to recover it. 00:25:20.613 [2024-07-15 23:28:35.635403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.613 [2024-07-15 23:28:35.635430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.613 qpair failed and we were unable to recover it. 00:25:20.613 [2024-07-15 23:28:35.635589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.613 [2024-07-15 23:28:35.635616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.613 qpair failed and we were unable to recover it. 
00:25:20.613 [2024-07-15 23:28:35.635772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.613 [2024-07-15 23:28:35.635812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.613 qpair failed and we were unable to recover it. 00:25:20.613 [2024-07-15 23:28:35.636083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.613 [2024-07-15 23:28:35.636110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.613 qpair failed and we were unable to recover it. 00:25:20.613 [2024-07-15 23:28:35.636284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.613 [2024-07-15 23:28:35.636306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.613 qpair failed and we were unable to recover it. 00:25:20.613 [2024-07-15 23:28:35.636451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.613 [2024-07-15 23:28:35.636480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.613 qpair failed and we were unable to recover it. 00:25:20.613 [2024-07-15 23:28:35.636676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.613 [2024-07-15 23:28:35.636703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.613 qpair failed and we were unable to recover it. 00:25:20.613 [2024-07-15 23:28:35.636941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.613 [2024-07-15 23:28:35.636965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.613 qpair failed and we were unable to recover it. 00:25:20.613 [2024-07-15 23:28:35.637100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.614 [2024-07-15 23:28:35.637127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.614 qpair failed and we were unable to recover it. 00:25:20.614 [2024-07-15 23:28:35.637269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.614 [2024-07-15 23:28:35.637297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.614 qpair failed and we were unable to recover it. 00:25:20.614 [2024-07-15 23:28:35.637551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.614 [2024-07-15 23:28:35.637573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.614 qpair failed and we were unable to recover it. 00:25:20.614 [2024-07-15 23:28:35.637810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.614 [2024-07-15 23:28:35.637838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.614 qpair failed and we were unable to recover it. 
00:25:20.614 [2024-07-15 23:28:35.637982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.614 [2024-07-15 23:28:35.638020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.614 qpair failed and we were unable to recover it. 00:25:20.614 [2024-07-15 23:28:35.638180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.614 [2024-07-15 23:28:35.638202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.614 qpair failed and we were unable to recover it. 00:25:20.614 [2024-07-15 23:28:35.638470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.614 [2024-07-15 23:28:35.638497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.614 qpair failed and we were unable to recover it. 00:25:20.614 [2024-07-15 23:28:35.638688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.614 [2024-07-15 23:28:35.638715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.614 qpair failed and we were unable to recover it. 00:25:20.614 [2024-07-15 23:28:35.638888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.614 [2024-07-15 23:28:35.638912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.614 qpair failed and we were unable to recover it. 00:25:20.614 [2024-07-15 23:28:35.639151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.614 [2024-07-15 23:28:35.639185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.614 qpair failed and we were unable to recover it. 00:25:20.614 [2024-07-15 23:28:35.639322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.614 [2024-07-15 23:28:35.639349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.614 qpair failed and we were unable to recover it. 00:25:20.614 [2024-07-15 23:28:35.639532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.614 [2024-07-15 23:28:35.639554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.614 qpair failed and we were unable to recover it. 00:25:20.614 [2024-07-15 23:28:35.639782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.614 [2024-07-15 23:28:35.639825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.614 qpair failed and we were unable to recover it. 00:25:20.614 [2024-07-15 23:28:35.639957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.614 [2024-07-15 23:28:35.639984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.614 qpair failed and we were unable to recover it. 
00:25:20.614 [2024-07-15 23:28:35.640105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.614 [2024-07-15 23:28:35.640142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.614 qpair failed and we were unable to recover it. 00:25:20.614 [2024-07-15 23:28:35.640352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.614 [2024-07-15 23:28:35.640383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.614 qpair failed and we were unable to recover it. 00:25:20.614 [2024-07-15 23:28:35.640582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.614 [2024-07-15 23:28:35.640610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.614 qpair failed and we were unable to recover it. 00:25:20.614 [2024-07-15 23:28:35.640777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.614 [2024-07-15 23:28:35.640800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.614 qpair failed and we were unable to recover it. 00:25:20.614 [2024-07-15 23:28:35.640972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.614 [2024-07-15 23:28:35.640999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.614 qpair failed and we were unable to recover it. 00:25:20.614 [2024-07-15 23:28:35.641177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.614 [2024-07-15 23:28:35.641214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.614 qpair failed and we were unable to recover it. 00:25:20.614 [2024-07-15 23:28:35.641379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.614 [2024-07-15 23:28:35.641402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.614 qpair failed and we were unable to recover it. 00:25:20.614 [2024-07-15 23:28:35.641641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.614 [2024-07-15 23:28:35.641677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.614 qpair failed and we were unable to recover it. 00:25:20.614 [2024-07-15 23:28:35.641785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.614 [2024-07-15 23:28:35.641813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.614 qpair failed and we were unable to recover it. 00:25:20.614 [2024-07-15 23:28:35.641970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.614 [2024-07-15 23:28:35.641993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.614 qpair failed and we were unable to recover it. 
00:25:20.614 [2024-07-15 23:28:35.642179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.614 [2024-07-15 23:28:35.642206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.614 qpair failed and we were unable to recover it. 00:25:20.614 [2024-07-15 23:28:35.642356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.614 [2024-07-15 23:28:35.642383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.614 qpair failed and we were unable to recover it. 00:25:20.614 [2024-07-15 23:28:35.642559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.614 [2024-07-15 23:28:35.642581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.614 qpair failed and we were unable to recover it. 00:25:20.614 [2024-07-15 23:28:35.642826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.614 [2024-07-15 23:28:35.642854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.614 qpair failed and we were unable to recover it. 00:25:20.614 [2024-07-15 23:28:35.643004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.614 [2024-07-15 23:28:35.643038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.614 qpair failed and we were unable to recover it. 00:25:20.614 [2024-07-15 23:28:35.643195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.614 [2024-07-15 23:28:35.643217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.614 qpair failed and we were unable to recover it. 00:25:20.614 [2024-07-15 23:28:35.643419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.614 [2024-07-15 23:28:35.643446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.614 qpair failed and we were unable to recover it. 00:25:20.614 [2024-07-15 23:28:35.643597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.614 [2024-07-15 23:28:35.643624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.614 qpair failed and we were unable to recover it. 00:25:20.614 [2024-07-15 23:28:35.643728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.614 [2024-07-15 23:28:35.643755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.614 qpair failed and we were unable to recover it. 00:25:20.614 [2024-07-15 23:28:35.643914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.614 [2024-07-15 23:28:35.643938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.614 qpair failed and we were unable to recover it. 
00:25:20.614 [2024-07-15 23:28:35.644120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.614 [2024-07-15 23:28:35.644148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.614 qpair failed and we were unable to recover it. 00:25:20.614 [2024-07-15 23:28:35.644317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.614 [2024-07-15 23:28:35.644340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.614 qpair failed and we were unable to recover it. 00:25:20.614 [2024-07-15 23:28:35.644527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.614 [2024-07-15 23:28:35.644554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.614 qpair failed and we were unable to recover it. 00:25:20.614 [2024-07-15 23:28:35.644809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.614 [2024-07-15 23:28:35.644833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.614 qpair failed and we were unable to recover it. 00:25:20.614 [2024-07-15 23:28:35.645232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.614 [2024-07-15 23:28:35.645289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.614 qpair failed and we were unable to recover it. 00:25:20.614 [2024-07-15 23:28:35.645540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.614 [2024-07-15 23:28:35.645567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.614 qpair failed and we were unable to recover it. 00:25:20.614 [2024-07-15 23:28:35.645766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.614 [2024-07-15 23:28:35.645805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.614 qpair failed and we were unable to recover it. 00:25:20.614 [2024-07-15 23:28:35.645956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.615 [2024-07-15 23:28:35.645978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.615 qpair failed and we were unable to recover it. 00:25:20.615 [2024-07-15 23:28:35.646131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.615 [2024-07-15 23:28:35.646168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.615 qpair failed and we were unable to recover it. 00:25:20.615 [2024-07-15 23:28:35.646414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.615 [2024-07-15 23:28:35.646441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.615 qpair failed and we were unable to recover it. 
00:25:20.615 [2024-07-15 23:28:35.646576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.615 [2024-07-15 23:28:35.646598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.615 qpair failed and we were unable to recover it. 00:25:20.615 [2024-07-15 23:28:35.646834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.615 [2024-07-15 23:28:35.646862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.615 qpair failed and we were unable to recover it. 00:25:20.615 [2024-07-15 23:28:35.647007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.615 [2024-07-15 23:28:35.647034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.615 qpair failed and we were unable to recover it. 00:25:20.615 [2024-07-15 23:28:35.647194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.615 [2024-07-15 23:28:35.647216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.615 qpair failed and we were unable to recover it. 00:25:20.615 [2024-07-15 23:28:35.647383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.615 [2024-07-15 23:28:35.647410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.615 qpair failed and we were unable to recover it. 00:25:20.615 [2024-07-15 23:28:35.647682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.615 [2024-07-15 23:28:35.647709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.615 qpair failed and we were unable to recover it. 00:25:20.615 [2024-07-15 23:28:35.647896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.615 [2024-07-15 23:28:35.647928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.615 qpair failed and we were unable to recover it. 00:25:20.615 [2024-07-15 23:28:35.648132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.615 [2024-07-15 23:28:35.648159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.615 qpair failed and we were unable to recover it. 00:25:20.615 [2024-07-15 23:28:35.648347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.615 [2024-07-15 23:28:35.648374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.615 qpair failed and we were unable to recover it. 00:25:20.615 [2024-07-15 23:28:35.648575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.615 [2024-07-15 23:28:35.648597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.615 qpair failed and we were unable to recover it. 
00:25:20.615 [2024-07-15 23:28:35.648765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.615 [2024-07-15 23:28:35.648794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.615 qpair failed and we were unable to recover it. 00:25:20.615 [2024-07-15 23:28:35.648995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.615 [2024-07-15 23:28:35.649023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.615 qpair failed and we were unable to recover it. 00:25:20.615 [2024-07-15 23:28:35.649182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.615 [2024-07-15 23:28:35.649204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.615 qpair failed and we were unable to recover it. 00:25:20.615 [2024-07-15 23:28:35.649421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.615 [2024-07-15 23:28:35.649455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.615 qpair failed and we were unable to recover it. 00:25:20.615 [2024-07-15 23:28:35.649625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.615 [2024-07-15 23:28:35.649652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.615 qpair failed and we were unable to recover it. 00:25:20.615 [2024-07-15 23:28:35.649868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.615 [2024-07-15 23:28:35.649892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.615 qpair failed and we were unable to recover it. 00:25:20.615 [2024-07-15 23:28:35.650052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.615 [2024-07-15 23:28:35.650087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.615 qpair failed and we were unable to recover it. 00:25:20.615 [2024-07-15 23:28:35.650270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.615 [2024-07-15 23:28:35.650296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.615 qpair failed and we were unable to recover it. 00:25:20.615 [2024-07-15 23:28:35.650419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.615 [2024-07-15 23:28:35.650456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.615 qpair failed and we were unable to recover it. 00:25:20.615 [2024-07-15 23:28:35.650586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.615 [2024-07-15 23:28:35.650609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.615 qpair failed and we were unable to recover it. 
00:25:20.615 [2024-07-15 23:28:35.650826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.615 [2024-07-15 23:28:35.650864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.615 qpair failed and we were unable to recover it. 00:25:20.615 [2024-07-15 23:28:35.651006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.615 [2024-07-15 23:28:35.651043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.615 qpair failed and we were unable to recover it. 00:25:20.615 [2024-07-15 23:28:35.651233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.615 [2024-07-15 23:28:35.651260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.615 qpair failed and we were unable to recover it. 00:25:20.615 [2024-07-15 23:28:35.651433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.615 [2024-07-15 23:28:35.651460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.615 qpair failed and we were unable to recover it. 00:25:20.615 [2024-07-15 23:28:35.651658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.615 [2024-07-15 23:28:35.651680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.615 qpair failed and we were unable to recover it. 00:25:20.615 [2024-07-15 23:28:35.651797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.615 [2024-07-15 23:28:35.651821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.615 qpair failed and we were unable to recover it. 00:25:20.615 [2024-07-15 23:28:35.652020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.615 [2024-07-15 23:28:35.652057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.615 qpair failed and we were unable to recover it. 00:25:20.615 [2024-07-15 23:28:35.652206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.615 [2024-07-15 23:28:35.652237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.615 qpair failed and we were unable to recover it. 00:25:20.615 [2024-07-15 23:28:35.652420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.615 [2024-07-15 23:28:35.652447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.615 qpair failed and we were unable to recover it. 00:25:20.615 [2024-07-15 23:28:35.652591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.615 [2024-07-15 23:28:35.652618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.615 qpair failed and we were unable to recover it. 
00:25:20.615 [2024-07-15 23:28:35.652775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.615 [2024-07-15 23:28:35.652798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.615 qpair failed and we were unable to recover it.
00:25:20.615 - 00:25:20.621 [2024-07-15 23:28:35.653055 through 23:28:35.696169] the same three-message sequence repeats continuously over this interval: posix.c:1023:posix_sock_create reports connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock reports a sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420; and each qpair fails and cannot be recovered.
00:25:20.621 [2024-07-15 23:28:35.696339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.621 [2024-07-15 23:28:35.696361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.621 qpair failed and we were unable to recover it. 00:25:20.621 [2024-07-15 23:28:35.696548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.621 [2024-07-15 23:28:35.696575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.621 qpair failed and we were unable to recover it. 00:25:20.621 [2024-07-15 23:28:35.696718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.621 [2024-07-15 23:28:35.696760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.621 qpair failed and we were unable to recover it. 00:25:20.621 [2024-07-15 23:28:35.696997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.621 [2024-07-15 23:28:35.697034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.621 qpair failed and we were unable to recover it. 00:25:20.621 [2024-07-15 23:28:35.697198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.621 [2024-07-15 23:28:35.697225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.621 qpair failed and we were unable to recover it. 00:25:20.621 [2024-07-15 23:28:35.697370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.621 [2024-07-15 23:28:35.697397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.621 qpair failed and we were unable to recover it. 00:25:20.621 [2024-07-15 23:28:35.697587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.621 [2024-07-15 23:28:35.697609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.621 qpair failed and we were unable to recover it. 00:25:20.621 [2024-07-15 23:28:35.697762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.621 [2024-07-15 23:28:35.697798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.621 qpair failed and we were unable to recover it. 00:25:20.621 [2024-07-15 23:28:35.697972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.621 [2024-07-15 23:28:35.697999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.621 qpair failed and we were unable to recover it. 00:25:20.621 [2024-07-15 23:28:35.698189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.621 [2024-07-15 23:28:35.698211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.621 qpair failed and we were unable to recover it. 
00:25:20.621 [2024-07-15 23:28:35.698357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.621 [2024-07-15 23:28:35.698384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.621 qpair failed and we were unable to recover it. 00:25:20.621 [2024-07-15 23:28:35.698551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.621 [2024-07-15 23:28:35.698579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.621 qpair failed and we were unable to recover it. 00:25:20.621 [2024-07-15 23:28:35.698707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.621 [2024-07-15 23:28:35.698730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.621 qpair failed and we were unable to recover it. 00:25:20.621 [2024-07-15 23:28:35.698866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.621 [2024-07-15 23:28:35.698898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.621 qpair failed and we were unable to recover it. 00:25:20.621 [2024-07-15 23:28:35.699057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.621 [2024-07-15 23:28:35.699085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.621 qpair failed and we were unable to recover it. 00:25:20.621 [2024-07-15 23:28:35.699221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.621 [2024-07-15 23:28:35.699258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.621 qpair failed and we were unable to recover it. 00:25:20.621 [2024-07-15 23:28:35.699371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.621 [2024-07-15 23:28:35.699394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.621 qpair failed and we were unable to recover it. 00:25:20.621 [2024-07-15 23:28:35.699581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.621 [2024-07-15 23:28:35.699608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.621 qpair failed and we were unable to recover it. 00:25:20.621 [2024-07-15 23:28:35.699836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.621 [2024-07-15 23:28:35.699860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.621 qpair failed and we were unable to recover it. 00:25:20.621 [2024-07-15 23:28:35.700011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.621 [2024-07-15 23:28:35.700038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.621 qpair failed and we were unable to recover it. 
00:25:20.621 [2024-07-15 23:28:35.700186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.621 [2024-07-15 23:28:35.700213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.621 qpair failed and we were unable to recover it. 00:25:20.621 [2024-07-15 23:28:35.700358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.621 [2024-07-15 23:28:35.700396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.621 qpair failed and we were unable to recover it. 00:25:20.621 [2024-07-15 23:28:35.700593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.621 [2024-07-15 23:28:35.700621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.621 qpair failed and we were unable to recover it. 00:25:20.621 [2024-07-15 23:28:35.700807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.621 [2024-07-15 23:28:35.700835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.621 qpair failed and we were unable to recover it. 00:25:20.621 [2024-07-15 23:28:35.700942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.621 [2024-07-15 23:28:35.700965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.621 qpair failed and we were unable to recover it. 00:25:20.621 [2024-07-15 23:28:35.701186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.621 [2024-07-15 23:28:35.701220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.621 qpair failed and we were unable to recover it. 00:25:20.621 [2024-07-15 23:28:35.701387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.621 [2024-07-15 23:28:35.701422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.621 qpair failed and we were unable to recover it. 00:25:20.622 [2024-07-15 23:28:35.701614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.622 [2024-07-15 23:28:35.701644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.622 qpair failed and we were unable to recover it. 00:25:20.622 [2024-07-15 23:28:35.701799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.622 [2024-07-15 23:28:35.701824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.622 qpair failed and we were unable to recover it. 00:25:20.622 [2024-07-15 23:28:35.702038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.622 [2024-07-15 23:28:35.702080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.622 qpair failed and we were unable to recover it. 
00:25:20.622 [2024-07-15 23:28:35.702260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.622 [2024-07-15 23:28:35.702283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.622 qpair failed and we were unable to recover it. 00:25:20.622 [2024-07-15 23:28:35.702484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.622 [2024-07-15 23:28:35.702512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.622 qpair failed and we were unable to recover it. 00:25:20.622 [2024-07-15 23:28:35.702755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.622 [2024-07-15 23:28:35.702799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.622 qpair failed and we were unable to recover it. 00:25:20.622 [2024-07-15 23:28:35.703006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.622 [2024-07-15 23:28:35.703042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.622 qpair failed and we were unable to recover it. 00:25:20.622 [2024-07-15 23:28:35.703195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.622 [2024-07-15 23:28:35.703222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.622 qpair failed and we were unable to recover it. 00:25:20.622 [2024-07-15 23:28:35.703405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.622 [2024-07-15 23:28:35.703432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.622 qpair failed and we were unable to recover it. 00:25:20.622 [2024-07-15 23:28:35.703633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.622 [2024-07-15 23:28:35.703655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.622 qpair failed and we were unable to recover it. 00:25:20.622 [2024-07-15 23:28:35.703835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.622 [2024-07-15 23:28:35.703859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.622 qpair failed and we were unable to recover it. 00:25:20.622 [2024-07-15 23:28:35.704043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.622 [2024-07-15 23:28:35.704080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.622 qpair failed and we were unable to recover it. 00:25:20.622 [2024-07-15 23:28:35.704276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.622 [2024-07-15 23:28:35.704298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.622 qpair failed and we were unable to recover it. 
00:25:20.622 [2024-07-15 23:28:35.704461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.622 [2024-07-15 23:28:35.704493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.622 qpair failed and we were unable to recover it. 00:25:20.622 [2024-07-15 23:28:35.704667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.622 [2024-07-15 23:28:35.704698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.622 qpair failed and we were unable to recover it. 00:25:20.622 [2024-07-15 23:28:35.704895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.622 [2024-07-15 23:28:35.704923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.622 qpair failed and we were unable to recover it. 00:25:20.622 [2024-07-15 23:28:35.705128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.622 [2024-07-15 23:28:35.705155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.622 qpair failed and we were unable to recover it. 00:25:20.622 [2024-07-15 23:28:35.705345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.622 [2024-07-15 23:28:35.705372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.622 qpair failed and we were unable to recover it. 00:25:20.622 [2024-07-15 23:28:35.705531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.622 [2024-07-15 23:28:35.705554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.622 qpair failed and we were unable to recover it. 00:25:20.622 [2024-07-15 23:28:35.705784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.622 [2024-07-15 23:28:35.705811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.622 qpair failed and we were unable to recover it. 00:25:20.622 [2024-07-15 23:28:35.705954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.622 [2024-07-15 23:28:35.705981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.622 qpair failed and we were unable to recover it. 00:25:20.622 [2024-07-15 23:28:35.706143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.622 [2024-07-15 23:28:35.706167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.622 qpair failed and we were unable to recover it. 00:25:20.622 [2024-07-15 23:28:35.706408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.622 [2024-07-15 23:28:35.706435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.622 qpair failed and we were unable to recover it. 
00:25:20.622 [2024-07-15 23:28:35.706577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.622 [2024-07-15 23:28:35.706604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.622 qpair failed and we were unable to recover it. 00:25:20.622 [2024-07-15 23:28:35.706756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.622 [2024-07-15 23:28:35.706780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.622 qpair failed and we were unable to recover it. 00:25:20.622 [2024-07-15 23:28:35.706919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.622 [2024-07-15 23:28:35.706957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.622 qpair failed and we were unable to recover it. 00:25:20.622 [2024-07-15 23:28:35.707147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.623 [2024-07-15 23:28:35.707174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.623 qpair failed and we were unable to recover it. 00:25:20.623 [2024-07-15 23:28:35.707346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.623 [2024-07-15 23:28:35.707371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.623 qpair failed and we were unable to recover it. 00:25:20.623 [2024-07-15 23:28:35.707534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.623 [2024-07-15 23:28:35.707558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.623 qpair failed and we were unable to recover it. 00:25:20.623 [2024-07-15 23:28:35.707715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.623 [2024-07-15 23:28:35.707748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.623 qpair failed and we were unable to recover it. 00:25:20.623 [2024-07-15 23:28:35.707873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.623 [2024-07-15 23:28:35.707898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.623 qpair failed and we were unable to recover it. 00:25:20.623 [2024-07-15 23:28:35.708048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.623 [2024-07-15 23:28:35.708072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.623 qpair failed and we were unable to recover it. 00:25:20.623 [2024-07-15 23:28:35.708268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.623 [2024-07-15 23:28:35.708295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.623 qpair failed and we were unable to recover it. 
00:25:20.623 [2024-07-15 23:28:35.708459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.623 [2024-07-15 23:28:35.708483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.623 qpair failed and we were unable to recover it. 00:25:20.623 [2024-07-15 23:28:35.708665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.623 [2024-07-15 23:28:35.708692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.623 qpair failed and we were unable to recover it. 00:25:20.623 [2024-07-15 23:28:35.708859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.623 [2024-07-15 23:28:35.708884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.623 qpair failed and we were unable to recover it. 00:25:20.623 [2024-07-15 23:28:35.708998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.623 [2024-07-15 23:28:35.709023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.623 qpair failed and we were unable to recover it. 00:25:20.623 [2024-07-15 23:28:35.709302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.623 [2024-07-15 23:28:35.709329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.623 qpair failed and we were unable to recover it. 00:25:20.623 [2024-07-15 23:28:35.709489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.623 [2024-07-15 23:28:35.709516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.623 qpair failed and we were unable to recover it. 00:25:20.623 [2024-07-15 23:28:35.709723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.623 [2024-07-15 23:28:35.709768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.623 qpair failed and we were unable to recover it. 00:25:20.623 [2024-07-15 23:28:35.709936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.623 [2024-07-15 23:28:35.709968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.623 qpair failed and we were unable to recover it. 00:25:20.623 [2024-07-15 23:28:35.710128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.623 [2024-07-15 23:28:35.710155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.623 qpair failed and we were unable to recover it. 00:25:20.623 [2024-07-15 23:28:35.710345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.623 [2024-07-15 23:28:35.710383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.623 qpair failed and we were unable to recover it. 
00:25:20.623 [2024-07-15 23:28:35.710531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.623 [2024-07-15 23:28:35.710558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.623 qpair failed and we were unable to recover it. 00:25:20.623 [2024-07-15 23:28:35.710678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.623 [2024-07-15 23:28:35.710706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.623 qpair failed and we were unable to recover it. 00:25:20.623 [2024-07-15 23:28:35.710859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.623 [2024-07-15 23:28:35.710884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.623 qpair failed and we were unable to recover it. 00:25:20.623 [2024-07-15 23:28:35.711006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.623 [2024-07-15 23:28:35.711047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.623 qpair failed and we were unable to recover it. 00:25:20.623 [2024-07-15 23:28:35.711213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.623 [2024-07-15 23:28:35.711240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.623 qpair failed and we were unable to recover it. 00:25:20.623 [2024-07-15 23:28:35.711433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.623 [2024-07-15 23:28:35.711475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.623 qpair failed and we were unable to recover it. 00:25:20.623 [2024-07-15 23:28:35.711622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.623 [2024-07-15 23:28:35.711649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.623 qpair failed and we were unable to recover it. 00:25:20.623 [2024-07-15 23:28:35.711801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.623 [2024-07-15 23:28:35.711829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.623 qpair failed and we were unable to recover it. 00:25:20.623 [2024-07-15 23:28:35.711972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.623 [2024-07-15 23:28:35.711997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.623 qpair failed and we were unable to recover it. 00:25:20.624 [2024-07-15 23:28:35.712144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.624 [2024-07-15 23:28:35.712182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.624 qpair failed and we were unable to recover it. 
00:25:20.624 [2024-07-15 23:28:35.712314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.624 [2024-07-15 23:28:35.712341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.624 qpair failed and we were unable to recover it. 00:25:20.624 [2024-07-15 23:28:35.712499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.624 [2024-07-15 23:28:35.712537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.624 qpair failed and we were unable to recover it. 00:25:20.624 [2024-07-15 23:28:35.712707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.624 [2024-07-15 23:28:35.712734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.624 qpair failed and we were unable to recover it. 00:25:20.624 [2024-07-15 23:28:35.712895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.624 [2024-07-15 23:28:35.712922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.624 qpair failed and we were unable to recover it. 00:25:20.624 [2024-07-15 23:28:35.713131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.624 [2024-07-15 23:28:35.713155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.624 qpair failed and we were unable to recover it. 00:25:20.624 [2024-07-15 23:28:35.713376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.624 [2024-07-15 23:28:35.713403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.624 qpair failed and we were unable to recover it. 00:25:20.624 [2024-07-15 23:28:35.713616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.624 [2024-07-15 23:28:35.713643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.624 qpair failed and we were unable to recover it. 00:25:20.624 [2024-07-15 23:28:35.713781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.624 [2024-07-15 23:28:35.713806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.624 qpair failed and we were unable to recover it. 00:25:20.624 [2024-07-15 23:28:35.713943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.624 [2024-07-15 23:28:35.713967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.624 qpair failed and we were unable to recover it. 00:25:20.624 [2024-07-15 23:28:35.714150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.624 [2024-07-15 23:28:35.714176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.624 qpair failed and we were unable to recover it. 
00:25:20.624 [2024-07-15 23:28:35.714312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.624 [2024-07-15 23:28:35.714337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.624 qpair failed and we were unable to recover it. 00:25:20.624 [2024-07-15 23:28:35.714451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.624 [2024-07-15 23:28:35.714476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.624 qpair failed and we were unable to recover it. 00:25:20.624 [2024-07-15 23:28:35.714631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.624 [2024-07-15 23:28:35.714658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.624 qpair failed and we were unable to recover it. 00:25:20.624 [2024-07-15 23:28:35.714811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.624 [2024-07-15 23:28:35.714836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.624 qpair failed and we were unable to recover it. 00:25:20.624 [2024-07-15 23:28:35.714951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.624 [2024-07-15 23:28:35.714975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.624 qpair failed and we were unable to recover it. 00:25:20.624 [2024-07-15 23:28:35.715098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.624 [2024-07-15 23:28:35.715125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.624 qpair failed and we were unable to recover it. 00:25:20.624 [2024-07-15 23:28:35.715290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.624 [2024-07-15 23:28:35.715314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.624 qpair failed and we were unable to recover it. 00:25:20.624 [2024-07-15 23:28:35.715440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.624 [2024-07-15 23:28:35.715464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.624 qpair failed and we were unable to recover it. 00:25:20.624 [2024-07-15 23:28:35.715584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.624 [2024-07-15 23:28:35.715611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.624 qpair failed and we were unable to recover it. 00:25:20.624 [2024-07-15 23:28:35.715733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.624 [2024-07-15 23:28:35.715763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.624 qpair failed and we were unable to recover it. 
00:25:20.624 [2024-07-15 23:28:35.715864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.624 [2024-07-15 23:28:35.715889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.624 qpair failed and we were unable to recover it. 00:25:20.624 [2024-07-15 23:28:35.716035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.624 [2024-07-15 23:28:35.716062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.624 qpair failed and we were unable to recover it. 00:25:20.624 [2024-07-15 23:28:35.716202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.624 [2024-07-15 23:28:35.716227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.624 qpair failed and we were unable to recover it. 00:25:20.624 [2024-07-15 23:28:35.716421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.624 [2024-07-15 23:28:35.716448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.624 qpair failed and we were unable to recover it. 00:25:20.624 [2024-07-15 23:28:35.716568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.624 [2024-07-15 23:28:35.716600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.624 qpair failed and we were unable to recover it. 00:25:20.624 [2024-07-15 23:28:35.716767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.624 [2024-07-15 23:28:35.716793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.624 qpair failed and we were unable to recover it. 00:25:20.624 [2024-07-15 23:28:35.716924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.624 [2024-07-15 23:28:35.716949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.625 qpair failed and we were unable to recover it. 00:25:20.625 [2024-07-15 23:28:35.717113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.625 [2024-07-15 23:28:35.717141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.625 qpair failed and we were unable to recover it. 00:25:20.625 [2024-07-15 23:28:35.717304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.625 [2024-07-15 23:28:35.717329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.625 qpair failed and we were unable to recover it. 00:25:20.625 [2024-07-15 23:28:35.717457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.625 [2024-07-15 23:28:35.717481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.625 qpair failed and we were unable to recover it. 
00:25:20.625 [2024-07-15 23:28:35.717634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.625 [2024-07-15 23:28:35.717661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.625 qpair failed and we were unable to recover it. 00:25:20.625 [2024-07-15 23:28:35.717841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.625 [2024-07-15 23:28:35.717866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.625 qpair failed and we were unable to recover it. 00:25:20.625 [2024-07-15 23:28:35.718013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.625 [2024-07-15 23:28:35.718054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.625 qpair failed and we were unable to recover it. 00:25:20.625 [2024-07-15 23:28:35.718209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.625 [2024-07-15 23:28:35.718236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.625 qpair failed and we were unable to recover it. 00:25:20.625 [2024-07-15 23:28:35.718375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.625 [2024-07-15 23:28:35.718413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.625 qpair failed and we were unable to recover it. 00:25:20.625 [2024-07-15 23:28:35.718533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.625 [2024-07-15 23:28:35.718558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.625 qpair failed and we were unable to recover it. 00:25:20.625 [2024-07-15 23:28:35.718750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.625 [2024-07-15 23:28:35.718792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.625 qpair failed and we were unable to recover it. 00:25:20.625 [2024-07-15 23:28:35.718934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.625 [2024-07-15 23:28:35.718959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.625 qpair failed and we were unable to recover it. 00:25:20.625 [2024-07-15 23:28:35.719122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.625 [2024-07-15 23:28:35.719150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.625 qpair failed and we were unable to recover it. 00:25:20.625 [2024-07-15 23:28:35.719290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.625 [2024-07-15 23:28:35.719317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.625 qpair failed and we were unable to recover it. 
00:25:20.625 [2024-07-15 23:28:35.719531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.625 [2024-07-15 23:28:35.719555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.625 qpair failed and we were unable to recover it. 00:25:20.625 [2024-07-15 23:28:35.719729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.625 [2024-07-15 23:28:35.719761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.625 qpair failed and we were unable to recover it. 00:25:20.625 [2024-07-15 23:28:35.719933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.625 [2024-07-15 23:28:35.719961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.625 qpair failed and we were unable to recover it. 00:25:20.625 [2024-07-15 23:28:35.720104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.625 [2024-07-15 23:28:35.720128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.625 qpair failed and we were unable to recover it. 00:25:20.625 [2024-07-15 23:28:35.720316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.625 [2024-07-15 23:28:35.720342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.625 qpair failed and we were unable to recover it. 00:25:20.625 [2024-07-15 23:28:35.720508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.625 [2024-07-15 23:28:35.720535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.625 qpair failed and we were unable to recover it. 00:25:20.625 [2024-07-15 23:28:35.720673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.625 [2024-07-15 23:28:35.720697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.625 qpair failed and we were unable to recover it. 00:25:20.625 [2024-07-15 23:28:35.720861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.625 [2024-07-15 23:28:35.720904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.625 qpair failed and we were unable to recover it. 00:25:20.625 [2024-07-15 23:28:35.721048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.625 [2024-07-15 23:28:35.721075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.625 qpair failed and we were unable to recover it. 00:25:20.625 [2024-07-15 23:28:35.721233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.625 [2024-07-15 23:28:35.721257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.625 qpair failed and we were unable to recover it. 
00:25:20.625 [2024-07-15 23:28:35.721407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.626 [2024-07-15 23:28:35.721430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420
00:25:20.626 qpair failed and we were unable to recover it.
00:25:20.626 [... the same three-line connect()/qpair error sequence (errno = 111, tqpair=0x1b9f1e0, addr=10.0.0.2, port=4420) repeats for every retry between 23:28:35.721 and 23:28:35.760; duplicate entries omitted ...]
00:25:20.632 [2024-07-15 23:28:35.760641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.632 [2024-07-15 23:28:35.760679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420
00:25:20.632 qpair failed and we were unable to recover it.
00:25:20.632 [2024-07-15 23:28:35.760866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.632 [2024-07-15 23:28:35.760894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.632 qpair failed and we were unable to recover it. 00:25:20.632 [2024-07-15 23:28:35.761011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.632 [2024-07-15 23:28:35.761038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.632 qpair failed and we were unable to recover it. 00:25:20.632 [2024-07-15 23:28:35.761187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.632 [2024-07-15 23:28:35.761226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.632 qpair failed and we were unable to recover it. 00:25:20.632 [2024-07-15 23:28:35.761393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.632 [2024-07-15 23:28:35.761420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.632 qpair failed and we were unable to recover it. 00:25:20.632 [2024-07-15 23:28:35.761575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.632 [2024-07-15 23:28:35.761602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.632 qpair failed and we were unable to recover it. 00:25:20.632 [2024-07-15 23:28:35.761759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.632 [2024-07-15 23:28:35.761784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.632 qpair failed and we were unable to recover it. 00:25:20.632 [2024-07-15 23:28:35.761920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.632 [2024-07-15 23:28:35.761945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.632 qpair failed and we were unable to recover it. 00:25:20.632 [2024-07-15 23:28:35.762097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.632 [2024-07-15 23:28:35.762124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.632 qpair failed and we were unable to recover it. 00:25:20.632 [2024-07-15 23:28:35.762292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.632 [2024-07-15 23:28:35.762316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.632 qpair failed and we were unable to recover it. 00:25:20.632 [2024-07-15 23:28:35.762473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.632 [2024-07-15 23:28:35.762500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.632 qpair failed and we were unable to recover it. 
00:25:20.632 [2024-07-15 23:28:35.762643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.632 [2024-07-15 23:28:35.762670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.632 qpair failed and we were unable to recover it. 00:25:20.632 [2024-07-15 23:28:35.762871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.632 [2024-07-15 23:28:35.762896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.632 qpair failed and we were unable to recover it. 00:25:20.632 [2024-07-15 23:28:35.763070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.632 [2024-07-15 23:28:35.763101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.632 qpair failed and we were unable to recover it. 00:25:20.632 [2024-07-15 23:28:35.763229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.632 [2024-07-15 23:28:35.763256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.632 qpair failed and we were unable to recover it. 00:25:20.632 [2024-07-15 23:28:35.763438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.632 [2024-07-15 23:28:35.763477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.632 qpair failed and we were unable to recover it. 00:25:20.632 [2024-07-15 23:28:35.763663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.632 [2024-07-15 23:28:35.763691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.632 qpair failed and we were unable to recover it. 00:25:20.632 [2024-07-15 23:28:35.763868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.632 [2024-07-15 23:28:35.763893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.633 qpair failed and we were unable to recover it. 00:25:20.633 [2024-07-15 23:28:35.764006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.633 [2024-07-15 23:28:35.764046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.633 qpair failed and we were unable to recover it. 00:25:20.633 [2024-07-15 23:28:35.764256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.633 [2024-07-15 23:28:35.764284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.633 qpair failed and we were unable to recover it. 00:25:20.633 [2024-07-15 23:28:35.764419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.633 [2024-07-15 23:28:35.764446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.633 qpair failed and we were unable to recover it. 
00:25:20.633 [2024-07-15 23:28:35.764594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.633 [2024-07-15 23:28:35.764618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.633 qpair failed and we were unable to recover it. 00:25:20.633 [2024-07-15 23:28:35.764782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.633 [2024-07-15 23:28:35.764807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.633 qpair failed and we were unable to recover it. 00:25:20.633 [2024-07-15 23:28:35.764943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.633 [2024-07-15 23:28:35.764967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.633 qpair failed and we were unable to recover it. 00:25:20.633 [2024-07-15 23:28:35.765123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.633 [2024-07-15 23:28:35.765147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.633 qpair failed and we were unable to recover it. 00:25:20.633 [2024-07-15 23:28:35.765311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.633 [2024-07-15 23:28:35.765339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.633 qpair failed and we were unable to recover it. 00:25:20.633 [2024-07-15 23:28:35.765472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.633 [2024-07-15 23:28:35.765498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.633 qpair failed and we were unable to recover it. 00:25:20.633 [2024-07-15 23:28:35.765629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.633 [2024-07-15 23:28:35.765668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.633 qpair failed and we were unable to recover it. 00:25:20.633 [2024-07-15 23:28:35.765820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.633 [2024-07-15 23:28:35.765862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.633 qpair failed and we were unable to recover it. 00:25:20.633 [2024-07-15 23:28:35.766007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.633 [2024-07-15 23:28:35.766034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.633 qpair failed and we were unable to recover it. 00:25:20.633 [2024-07-15 23:28:35.766205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.633 [2024-07-15 23:28:35.766229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.633 qpair failed and we were unable to recover it. 
00:25:20.633 [2024-07-15 23:28:35.766393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.633 [2024-07-15 23:28:35.766419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.633 qpair failed and we were unable to recover it. 00:25:20.633 [2024-07-15 23:28:35.766528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.633 [2024-07-15 23:28:35.766555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.633 qpair failed and we were unable to recover it. 00:25:20.633 [2024-07-15 23:28:35.766745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.633 [2024-07-15 23:28:35.766784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.633 qpair failed and we were unable to recover it. 00:25:20.633 [2024-07-15 23:28:35.766938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.633 [2024-07-15 23:28:35.766965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.633 qpair failed and we were unable to recover it. 00:25:20.633 [2024-07-15 23:28:35.767145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.633 [2024-07-15 23:28:35.767172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.633 qpair failed and we were unable to recover it. 00:25:20.633 [2024-07-15 23:28:35.767350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.633 [2024-07-15 23:28:35.767389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.633 qpair failed and we were unable to recover it. 00:25:20.633 [2024-07-15 23:28:35.767543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.633 [2024-07-15 23:28:35.767570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.633 qpair failed and we were unable to recover it. 00:25:20.633 [2024-07-15 23:28:35.767704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.633 [2024-07-15 23:28:35.767731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.633 qpair failed and we were unable to recover it. 00:25:20.633 [2024-07-15 23:28:35.767857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.633 [2024-07-15 23:28:35.767881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.633 qpair failed and we were unable to recover it. 00:25:20.633 [2024-07-15 23:28:35.767995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.633 [2024-07-15 23:28:35.768023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.633 qpair failed and we were unable to recover it. 
00:25:20.633 [2024-07-15 23:28:35.768188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.633 [2024-07-15 23:28:35.768215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.633 qpair failed and we were unable to recover it. 00:25:20.633 [2024-07-15 23:28:35.768374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.633 [2024-07-15 23:28:35.768397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.633 qpair failed and we were unable to recover it. 00:25:20.633 [2024-07-15 23:28:35.768568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.633 [2024-07-15 23:28:35.768592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.633 qpair failed and we were unable to recover it. 00:25:20.633 [2024-07-15 23:28:35.768760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.633 [2024-07-15 23:28:35.768801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.633 qpair failed and we were unable to recover it. 00:25:20.633 [2024-07-15 23:28:35.768935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.633 [2024-07-15 23:28:35.768959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.633 qpair failed and we were unable to recover it. 00:25:20.633 [2024-07-15 23:28:35.769092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.633 [2024-07-15 23:28:35.769134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.633 qpair failed and we were unable to recover it. 00:25:20.633 [2024-07-15 23:28:35.769290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.633 [2024-07-15 23:28:35.769317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.633 qpair failed and we were unable to recover it. 00:25:20.633 [2024-07-15 23:28:35.769456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.633 [2024-07-15 23:28:35.769480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.633 qpair failed and we were unable to recover it. 00:25:20.633 [2024-07-15 23:28:35.769612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.633 [2024-07-15 23:28:35.769636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.633 qpair failed and we were unable to recover it. 00:25:20.633 [2024-07-15 23:28:35.769766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.633 [2024-07-15 23:28:35.769794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.633 qpair failed and we were unable to recover it. 
00:25:20.633 [2024-07-15 23:28:35.769937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.633 [2024-07-15 23:28:35.769961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.633 qpair failed and we were unable to recover it. 00:25:20.633 [2024-07-15 23:28:35.770122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.633 [2024-07-15 23:28:35.770161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.633 qpair failed and we were unable to recover it. 00:25:20.633 [2024-07-15 23:28:35.770284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.633 [2024-07-15 23:28:35.770311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.633 qpair failed and we were unable to recover it. 00:25:20.633 [2024-07-15 23:28:35.770466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.633 [2024-07-15 23:28:35.770491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.633 qpair failed and we were unable to recover it. 00:25:20.633 [2024-07-15 23:28:35.770639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.633 [2024-07-15 23:28:35.770680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.633 qpair failed and we were unable to recover it. 00:25:20.633 [2024-07-15 23:28:35.770797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.633 [2024-07-15 23:28:35.770822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.633 qpair failed and we were unable to recover it. 00:25:20.633 [2024-07-15 23:28:35.770958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.633 [2024-07-15 23:28:35.770982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.633 qpair failed and we were unable to recover it. 00:25:20.633 [2024-07-15 23:28:35.771089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.633 [2024-07-15 23:28:35.771129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.633 qpair failed and we were unable to recover it. 00:25:20.634 [2024-07-15 23:28:35.771246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.634 [2024-07-15 23:28:35.771273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.634 qpair failed and we were unable to recover it. 00:25:20.634 [2024-07-15 23:28:35.771426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.634 [2024-07-15 23:28:35.771451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.634 qpair failed and we were unable to recover it. 
00:25:20.634 [2024-07-15 23:28:35.771569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.634 [2024-07-15 23:28:35.771594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.634 qpair failed and we were unable to recover it. 00:25:20.634 [2024-07-15 23:28:35.771713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.634 [2024-07-15 23:28:35.771745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.634 qpair failed and we were unable to recover it. 00:25:20.634 [2024-07-15 23:28:35.771866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.634 [2024-07-15 23:28:35.771891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.634 qpair failed and we were unable to recover it. 00:25:20.634 [2024-07-15 23:28:35.772073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.634 [2024-07-15 23:28:35.772101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.634 qpair failed and we were unable to recover it. 00:25:20.634 [2024-07-15 23:28:35.772246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.634 [2024-07-15 23:28:35.772273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.634 qpair failed and we were unable to recover it. 00:25:20.634 [2024-07-15 23:28:35.772405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.634 [2024-07-15 23:28:35.772447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.634 qpair failed and we were unable to recover it. 00:25:20.634 [2024-07-15 23:28:35.772554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.634 [2024-07-15 23:28:35.772578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.634 qpair failed and we were unable to recover it. 00:25:20.634 [2024-07-15 23:28:35.772769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.634 [2024-07-15 23:28:35.772797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.634 qpair failed and we were unable to recover it. 00:25:20.634 [2024-07-15 23:28:35.772946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.634 [2024-07-15 23:28:35.772970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.634 qpair failed and we were unable to recover it. 00:25:20.634 [2024-07-15 23:28:35.773102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.634 [2024-07-15 23:28:35.773126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.634 qpair failed and we were unable to recover it. 
00:25:20.634 [2024-07-15 23:28:35.773255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.634 [2024-07-15 23:28:35.773282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.634 qpair failed and we were unable to recover it. 00:25:20.634 [2024-07-15 23:28:35.773415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.634 [2024-07-15 23:28:35.773440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.634 qpair failed and we were unable to recover it. 00:25:20.634 [2024-07-15 23:28:35.773541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.634 [2024-07-15 23:28:35.773566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.634 qpair failed and we were unable to recover it. 00:25:20.634 [2024-07-15 23:28:35.773708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.634 [2024-07-15 23:28:35.773735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.634 qpair failed and we were unable to recover it. 00:25:20.634 [2024-07-15 23:28:35.773896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.634 [2024-07-15 23:28:35.773920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.634 qpair failed and we were unable to recover it. 00:25:20.634 [2024-07-15 23:28:35.774058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.634 [2024-07-15 23:28:35.774083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.634 qpair failed and we were unable to recover it. 00:25:20.634 [2024-07-15 23:28:35.774262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.634 [2024-07-15 23:28:35.774289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.634 qpair failed and we were unable to recover it. 00:25:20.634 [2024-07-15 23:28:35.774451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.634 [2024-07-15 23:28:35.774475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.634 qpair failed and we were unable to recover it. 00:25:20.634 [2024-07-15 23:28:35.774642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.634 [2024-07-15 23:28:35.774670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.634 qpair failed and we were unable to recover it. 00:25:20.634 [2024-07-15 23:28:35.774823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.634 [2024-07-15 23:28:35.774852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.634 qpair failed and we were unable to recover it. 
00:25:20.634 [2024-07-15 23:28:35.775045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.634 [2024-07-15 23:28:35.775069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.634 qpair failed and we were unable to recover it. 00:25:20.634 [2024-07-15 23:28:35.775242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.634 [2024-07-15 23:28:35.775269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.634 qpair failed and we were unable to recover it. 00:25:20.634 [2024-07-15 23:28:35.775414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.634 [2024-07-15 23:28:35.775440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.634 qpair failed and we were unable to recover it. 00:25:20.634 [2024-07-15 23:28:35.775589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.634 [2024-07-15 23:28:35.775616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.634 qpair failed and we were unable to recover it. 00:25:20.634 [2024-07-15 23:28:35.775757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.634 [2024-07-15 23:28:35.775798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.634 qpair failed and we were unable to recover it. 00:25:20.634 [2024-07-15 23:28:35.775944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.634 [2024-07-15 23:28:35.775968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.634 qpair failed and we were unable to recover it. 00:25:20.634 [2024-07-15 23:28:35.776131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.634 [2024-07-15 23:28:35.776155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.634 qpair failed and we were unable to recover it. 00:25:20.634 [2024-07-15 23:28:35.776314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.634 [2024-07-15 23:28:35.776341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.634 qpair failed and we were unable to recover it. 00:25:20.634 [2024-07-15 23:28:35.776453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.634 [2024-07-15 23:28:35.776480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.634 qpair failed and we were unable to recover it. 00:25:20.634 [2024-07-15 23:28:35.776645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.634 [2024-07-15 23:28:35.776670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.634 qpair failed and we were unable to recover it. 
00:25:20.634 [2024-07-15 23:28:35.776843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.634 [2024-07-15 23:28:35.776871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.634 qpair failed and we were unable to recover it. 00:25:20.634 [2024-07-15 23:28:35.776994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.634 [2024-07-15 23:28:35.777021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.634 qpair failed and we were unable to recover it. 00:25:20.634 [2024-07-15 23:28:35.777228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.634 [2024-07-15 23:28:35.777261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.634 qpair failed and we were unable to recover it. 00:25:20.634 [2024-07-15 23:28:35.777401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.634 [2024-07-15 23:28:35.777437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.634 qpair failed and we were unable to recover it. 00:25:20.634 [2024-07-15 23:28:35.777600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.634 [2024-07-15 23:28:35.777628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.634 qpair failed and we were unable to recover it. 00:25:20.634 [2024-07-15 23:28:35.777842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.634 [2024-07-15 23:28:35.777867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.634 qpair failed and we were unable to recover it. 00:25:20.634 [2024-07-15 23:28:35.778017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.635 [2024-07-15 23:28:35.778044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.635 qpair failed and we were unable to recover it. 00:25:20.635 [2024-07-15 23:28:35.778196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.635 [2024-07-15 23:28:35.778223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.635 qpair failed and we were unable to recover it. 00:25:20.635 [2024-07-15 23:28:35.778354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.635 [2024-07-15 23:28:35.778398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.635 qpair failed and we were unable to recover it. 00:25:20.635 [2024-07-15 23:28:35.778513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.635 [2024-07-15 23:28:35.778536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.635 qpair failed and we were unable to recover it. 
00:25:20.635 [2024-07-15 23:28:35.778686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.635 [2024-07-15 23:28:35.778713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.635 qpair failed and we were unable to recover it. 00:25:20.635 [2024-07-15 23:28:35.778870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.635 [2024-07-15 23:28:35.778895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.635 qpair failed and we were unable to recover it. 00:25:20.635 [2024-07-15 23:28:35.778995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.635 [2024-07-15 23:28:35.779020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.635 qpair failed and we were unable to recover it. 00:25:20.635 [2024-07-15 23:28:35.779271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.635 [2024-07-15 23:28:35.779299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.635 qpair failed and we were unable to recover it. 00:25:20.635 [2024-07-15 23:28:35.779447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.635 [2024-07-15 23:28:35.779472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.635 qpair failed and we were unable to recover it. 00:25:20.635 [2024-07-15 23:28:35.779649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.635 [2024-07-15 23:28:35.779676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.635 qpair failed and we were unable to recover it. 00:25:20.635 [2024-07-15 23:28:35.779819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.635 [2024-07-15 23:28:35.779847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.635 qpair failed and we were unable to recover it. 00:25:20.635 [2024-07-15 23:28:35.779999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.635 [2024-07-15 23:28:35.780028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.635 qpair failed and we were unable to recover it. 00:25:20.635 [2024-07-15 23:28:35.780218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.635 [2024-07-15 23:28:35.780243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.635 qpair failed and we were unable to recover it. 00:25:20.635 [2024-07-15 23:28:35.780357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.635 [2024-07-15 23:28:35.780390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.635 qpair failed and we were unable to recover it. 
00:25:20.635 [2024-07-15 23:28:35.780557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.635 [2024-07-15 23:28:35.780582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.635 qpair failed and we were unable to recover it. 00:25:20.635 [2024-07-15 23:28:35.780747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.635 [2024-07-15 23:28:35.780772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.635 qpair failed and we were unable to recover it. 00:25:20.635 [2024-07-15 23:28:35.780923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.635 [2024-07-15 23:28:35.780948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.635 qpair failed and we were unable to recover it. 00:25:20.635 [2024-07-15 23:28:35.781062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.635 [2024-07-15 23:28:35.781087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.635 qpair failed and we were unable to recover it. 00:25:20.635 [2024-07-15 23:28:35.781270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.635 [2024-07-15 23:28:35.781295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.635 qpair failed and we were unable to recover it. 00:25:20.635 [2024-07-15 23:28:35.781493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.635 [2024-07-15 23:28:35.781520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.635 qpair failed and we were unable to recover it. 00:25:20.635 [2024-07-15 23:28:35.781674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.635 [2024-07-15 23:28:35.781702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.635 qpair failed and we were unable to recover it. 00:25:20.635 [2024-07-15 23:28:35.781861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.635 [2024-07-15 23:28:35.781887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.635 qpair failed and we were unable to recover it. 00:25:20.635 [2024-07-15 23:28:35.782056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.635 [2024-07-15 23:28:35.782084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.635 qpair failed and we were unable to recover it. 00:25:20.635 [2024-07-15 23:28:35.782237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.635 [2024-07-15 23:28:35.782264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.635 qpair failed and we were unable to recover it. 
00:25:20.635 [2024-07-15 23:28:35.782439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.635 [2024-07-15 23:28:35.782464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.635 qpair failed and we were unable to recover it. 00:25:20.635 [2024-07-15 23:28:35.782596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.635 [2024-07-15 23:28:35.782624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.635 qpair failed and we were unable to recover it. 00:25:20.635 [2024-07-15 23:28:35.782762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.635 [2024-07-15 23:28:35.782791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.635 qpair failed and we were unable to recover it. 00:25:20.635 [2024-07-15 23:28:35.782908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.635 [2024-07-15 23:28:35.782933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.635 qpair failed and we were unable to recover it. 00:25:20.635 [2024-07-15 23:28:35.783072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.635 [2024-07-15 23:28:35.783097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.635 qpair failed and we were unable to recover it. 00:25:20.635 [2024-07-15 23:28:35.783276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.635 [2024-07-15 23:28:35.783304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.635 qpair failed and we were unable to recover it. 00:25:20.635 [2024-07-15 23:28:35.783424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.635 [2024-07-15 23:28:35.783449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.635 qpair failed and we were unable to recover it. 00:25:20.635 [2024-07-15 23:28:35.784591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.635 [2024-07-15 23:28:35.784623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.635 qpair failed and we were unable to recover it. 00:25:20.635 [2024-07-15 23:28:35.784788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.635 [2024-07-15 23:28:35.784818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.635 qpair failed and we were unable to recover it. 00:25:20.635 [2024-07-15 23:28:35.784967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.635 [2024-07-15 23:28:35.784993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.635 qpair failed and we were unable to recover it. 
00:25:20.635 [2024-07-15 23:28:35.785186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.635 [2024-07-15 23:28:35.785214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420
00:25:20.635 qpair failed and we were unable to recover it.
00:25:20.635-00:25:20.641 [2024-07-15 23:28:35.785358 .. 23:28:35.824320] the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every successive reconnect attempt in this interval
00:25:20.641 [2024-07-15 23:28:35.824501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.641 [2024-07-15 23:28:35.824529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420
00:25:20.641 qpair failed and we were unable to recover it.
00:25:20.641 [2024-07-15 23:28:35.824706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.641 [2024-07-15 23:28:35.824728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.641 qpair failed and we were unable to recover it. 00:25:20.641 [2024-07-15 23:28:35.824884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.641 [2024-07-15 23:28:35.824908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.641 qpair failed and we were unable to recover it. 00:25:20.641 [2024-07-15 23:28:35.825047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.641 [2024-07-15 23:28:35.825074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.641 qpair failed and we were unable to recover it. 00:25:20.641 [2024-07-15 23:28:35.825272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.641 [2024-07-15 23:28:35.825294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.641 qpair failed and we were unable to recover it. 00:25:20.641 [2024-07-15 23:28:35.825482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.641 [2024-07-15 23:28:35.825509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.641 qpair failed and we were unable to recover it. 00:25:20.641 [2024-07-15 23:28:35.825635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.641 [2024-07-15 23:28:35.825663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.641 qpair failed and we were unable to recover it. 00:25:20.641 [2024-07-15 23:28:35.825833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.641 [2024-07-15 23:28:35.825858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.641 qpair failed and we were unable to recover it. 00:25:20.641 [2024-07-15 23:28:35.826015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.641 [2024-07-15 23:28:35.826053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.641 qpair failed and we were unable to recover it. 00:25:20.641 [2024-07-15 23:28:35.826226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.641 [2024-07-15 23:28:35.826254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.641 qpair failed and we were unable to recover it. 00:25:20.641 [2024-07-15 23:28:35.826451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.641 [2024-07-15 23:28:35.826510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.641 qpair failed and we were unable to recover it. 
00:25:20.641 [2024-07-15 23:28:35.826641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.641 [2024-07-15 23:28:35.826667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.641 qpair failed and we were unable to recover it. 00:25:20.641 [2024-07-15 23:28:35.826835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.641 [2024-07-15 23:28:35.826863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.641 qpair failed and we were unable to recover it. 00:25:20.641 [2024-07-15 23:28:35.826983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.641 [2024-07-15 23:28:35.827008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.641 qpair failed and we were unable to recover it. 00:25:20.641 [2024-07-15 23:28:35.827190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.641 [2024-07-15 23:28:35.827217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.641 qpair failed and we were unable to recover it. 00:25:20.641 [2024-07-15 23:28:35.827396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.641 [2024-07-15 23:28:35.827424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.641 qpair failed and we were unable to recover it. 00:25:20.641 [2024-07-15 23:28:35.827580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.641 [2024-07-15 23:28:35.827607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.641 qpair failed and we were unable to recover it. 00:25:20.641 [2024-07-15 23:28:35.827801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.641 [2024-07-15 23:28:35.827826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.641 qpair failed and we were unable to recover it. 00:25:20.641 [2024-07-15 23:28:35.827949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.641 [2024-07-15 23:28:35.827974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.641 qpair failed and we were unable to recover it. 00:25:20.641 [2024-07-15 23:28:35.828170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.641 [2024-07-15 23:28:35.828193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.641 qpair failed and we were unable to recover it. 00:25:20.641 [2024-07-15 23:28:35.828344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.641 [2024-07-15 23:28:35.828371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.641 qpair failed and we were unable to recover it. 
00:25:20.641 [2024-07-15 23:28:35.828528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.641 [2024-07-15 23:28:35.828556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.641 qpair failed and we were unable to recover it. 00:25:20.641 [2024-07-15 23:28:35.828683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.641 [2024-07-15 23:28:35.828720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.641 qpair failed and we were unable to recover it. 00:25:20.641 [2024-07-15 23:28:35.828866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.641 [2024-07-15 23:28:35.828891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.641 qpair failed and we were unable to recover it. 00:25:20.641 [2024-07-15 23:28:35.829038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.641 [2024-07-15 23:28:35.829078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.641 qpair failed and we were unable to recover it. 00:25:20.641 [2024-07-15 23:28:35.829250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.641 [2024-07-15 23:28:35.829273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.641 qpair failed and we were unable to recover it. 00:25:20.641 [2024-07-15 23:28:35.829441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.641 [2024-07-15 23:28:35.829468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.641 qpair failed and we were unable to recover it. 00:25:20.641 [2024-07-15 23:28:35.829635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.641 [2024-07-15 23:28:35.829662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.641 qpair failed and we were unable to recover it. 00:25:20.641 [2024-07-15 23:28:35.829844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.641 [2024-07-15 23:28:35.829879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.641 qpair failed and we were unable to recover it. 00:25:20.641 [2024-07-15 23:28:35.829988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.641 [2024-07-15 23:28:35.830012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.641 qpair failed and we were unable to recover it. 00:25:20.641 [2024-07-15 23:28:35.830215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.641 [2024-07-15 23:28:35.830242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.641 qpair failed and we were unable to recover it. 
00:25:20.641 [2024-07-15 23:28:35.830406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.641 [2024-07-15 23:28:35.830433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.641 qpair failed and we were unable to recover it. 00:25:20.641 [2024-07-15 23:28:35.830564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.641 [2024-07-15 23:28:35.830591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.641 qpair failed and we were unable to recover it. 00:25:20.641 [2024-07-15 23:28:35.830763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.641 [2024-07-15 23:28:35.830803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.641 qpair failed and we were unable to recover it. 00:25:20.641 [2024-07-15 23:28:35.830996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.641 [2024-07-15 23:28:35.831021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.641 qpair failed and we were unable to recover it. 00:25:20.641 [2024-07-15 23:28:35.831184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.641 [2024-07-15 23:28:35.831212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.641 qpair failed and we were unable to recover it. 00:25:20.641 [2024-07-15 23:28:35.831380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.641 [2024-07-15 23:28:35.831407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.641 qpair failed and we were unable to recover it. 00:25:20.641 [2024-07-15 23:28:35.831563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.641 [2024-07-15 23:28:35.831590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.641 qpair failed and we were unable to recover it. 00:25:20.641 [2024-07-15 23:28:35.831755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.641 [2024-07-15 23:28:35.831797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.641 qpair failed and we were unable to recover it. 00:25:20.641 [2024-07-15 23:28:35.831951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.641 [2024-07-15 23:28:35.831979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.641 qpair failed and we were unable to recover it. 00:25:20.641 [2024-07-15 23:28:35.832158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.641 [2024-07-15 23:28:35.832181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.641 qpair failed and we were unable to recover it. 
00:25:20.641 [2024-07-15 23:28:35.832372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.641 [2024-07-15 23:28:35.832395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.641 qpair failed and we were unable to recover it. 00:25:20.641 [2024-07-15 23:28:35.832578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.642 [2024-07-15 23:28:35.832605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.642 qpair failed and we were unable to recover it. 00:25:20.642 [2024-07-15 23:28:35.832809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.642 [2024-07-15 23:28:35.832834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.642 qpair failed and we were unable to recover it. 00:25:20.642 [2024-07-15 23:28:35.832954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.642 [2024-07-15 23:28:35.832979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.642 qpair failed and we were unable to recover it. 00:25:20.642 [2024-07-15 23:28:35.833137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.642 [2024-07-15 23:28:35.833166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.642 qpair failed and we were unable to recover it. 00:25:20.642 [2024-07-15 23:28:35.833324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.642 [2024-07-15 23:28:35.833348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.642 qpair failed and we were unable to recover it. 00:25:20.642 [2024-07-15 23:28:35.833563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.642 [2024-07-15 23:28:35.833590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.642 qpair failed and we were unable to recover it. 00:25:20.642 [2024-07-15 23:28:35.833748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.642 [2024-07-15 23:28:35.833772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.642 qpair failed and we were unable to recover it. 00:25:20.642 [2024-07-15 23:28:35.833895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.642 [2024-07-15 23:28:35.833919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.642 qpair failed and we were unable to recover it. 00:25:20.642 [2024-07-15 23:28:35.834049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.642 [2024-07-15 23:28:35.834073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.642 qpair failed and we were unable to recover it. 
00:25:20.642 [2024-07-15 23:28:35.834255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.642 [2024-07-15 23:28:35.834282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.642 qpair failed and we were unable to recover it. 00:25:20.642 [2024-07-15 23:28:35.834423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.642 [2024-07-15 23:28:35.834462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.642 qpair failed and we were unable to recover it. 00:25:20.642 [2024-07-15 23:28:35.834659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.642 [2024-07-15 23:28:35.834683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.642 qpair failed and we were unable to recover it. 00:25:20.642 [2024-07-15 23:28:35.834857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.642 [2024-07-15 23:28:35.834881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.642 qpair failed and we were unable to recover it. 00:25:20.642 [2024-07-15 23:28:35.835041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.642 [2024-07-15 23:28:35.835064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.642 qpair failed and we were unable to recover it. 00:25:20.642 [2024-07-15 23:28:35.835232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.642 [2024-07-15 23:28:35.835267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.642 qpair failed and we were unable to recover it. 00:25:20.642 [2024-07-15 23:28:35.835522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.642 [2024-07-15 23:28:35.835549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.642 qpair failed and we were unable to recover it. 00:25:20.642 [2024-07-15 23:28:35.835787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.642 [2024-07-15 23:28:35.835812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.642 qpair failed and we were unable to recover it. 00:25:20.642 [2024-07-15 23:28:35.835960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.642 [2024-07-15 23:28:35.835983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.642 qpair failed and we were unable to recover it. 00:25:20.642 [2024-07-15 23:28:35.836177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.642 [2024-07-15 23:28:35.836205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.642 qpair failed and we were unable to recover it. 
00:25:20.642 [2024-07-15 23:28:35.836364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.642 [2024-07-15 23:28:35.836421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.642 qpair failed and we were unable to recover it. 00:25:20.642 [2024-07-15 23:28:35.836592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.642 [2024-07-15 23:28:35.836615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.642 qpair failed and we were unable to recover it. 00:25:20.642 [2024-07-15 23:28:35.836793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.642 [2024-07-15 23:28:35.836817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.642 qpair failed and we were unable to recover it. 00:25:20.642 [2024-07-15 23:28:35.836942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.642 [2024-07-15 23:28:35.836965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.642 qpair failed and we were unable to recover it. 00:25:20.642 [2024-07-15 23:28:35.837189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.642 [2024-07-15 23:28:35.837217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.642 qpair failed and we were unable to recover it. 00:25:20.642 [2024-07-15 23:28:35.837354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.642 [2024-07-15 23:28:35.837381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.642 qpair failed and we were unable to recover it. 00:25:20.642 [2024-07-15 23:28:35.837588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.642 [2024-07-15 23:28:35.837625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.642 qpair failed and we were unable to recover it. 00:25:20.642 [2024-07-15 23:28:35.837787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.642 [2024-07-15 23:28:35.837811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.642 qpair failed and we were unable to recover it. 00:25:20.642 [2024-07-15 23:28:35.837953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.642 [2024-07-15 23:28:35.837978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.642 qpair failed and we were unable to recover it. 00:25:20.642 [2024-07-15 23:28:35.838136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.642 [2024-07-15 23:28:35.838158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.642 qpair failed and we were unable to recover it. 
00:25:20.642 [2024-07-15 23:28:35.838343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.642 [2024-07-15 23:28:35.838366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.642 qpair failed and we were unable to recover it. 00:25:20.642 [2024-07-15 23:28:35.838539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.642 [2024-07-15 23:28:35.838576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.642 qpair failed and we were unable to recover it. 00:25:20.642 [2024-07-15 23:28:35.838761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.642 [2024-07-15 23:28:35.838809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.642 qpair failed and we were unable to recover it. 00:25:20.642 [2024-07-15 23:28:35.838943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.642 [2024-07-15 23:28:35.838967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.642 qpair failed and we were unable to recover it. 00:25:20.642 [2024-07-15 23:28:35.839134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.642 [2024-07-15 23:28:35.839162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.642 qpair failed and we were unable to recover it. 00:25:20.642 [2024-07-15 23:28:35.839288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.642 [2024-07-15 23:28:35.839341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.642 qpair failed and we were unable to recover it. 00:25:20.642 [2024-07-15 23:28:35.839506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.642 [2024-07-15 23:28:35.839533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.642 qpair failed and we were unable to recover it. 00:25:20.642 [2024-07-15 23:28:35.839662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.642 [2024-07-15 23:28:35.839689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.642 qpair failed and we were unable to recover it. 00:25:20.642 [2024-07-15 23:28:35.839874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.642 [2024-07-15 23:28:35.839898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.642 qpair failed and we were unable to recover it. 00:25:20.642 [2024-07-15 23:28:35.840024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.642 [2024-07-15 23:28:35.840061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.642 qpair failed and we were unable to recover it. 
00:25:20.642 [2024-07-15 23:28:35.840246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.642 [2024-07-15 23:28:35.840273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.642 qpair failed and we were unable to recover it. 00:25:20.642 [2024-07-15 23:28:35.840436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.642 [2024-07-15 23:28:35.840464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.642 qpair failed and we were unable to recover it. 00:25:20.642 [2024-07-15 23:28:35.840633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.642 [2024-07-15 23:28:35.840660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.642 qpair failed and we were unable to recover it. 00:25:20.642 [2024-07-15 23:28:35.840819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.642 [2024-07-15 23:28:35.840844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.642 qpair failed and we were unable to recover it. 00:25:20.642 [2024-07-15 23:28:35.841003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.642 [2024-07-15 23:28:35.841041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.642 qpair failed and we were unable to recover it. 00:25:20.642 [2024-07-15 23:28:35.841196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.642 [2024-07-15 23:28:35.841224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.642 qpair failed and we were unable to recover it. 00:25:20.642 [2024-07-15 23:28:35.841334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.642 [2024-07-15 23:28:35.841361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.642 qpair failed and we were unable to recover it. 00:25:20.642 [2024-07-15 23:28:35.841521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.642 [2024-07-15 23:28:35.841548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.643 qpair failed and we were unable to recover it. 00:25:20.643 [2024-07-15 23:28:35.841704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.643 [2024-07-15 23:28:35.841731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.643 qpair failed and we were unable to recover it. 00:25:20.643 [2024-07-15 23:28:35.841888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.643 [2024-07-15 23:28:35.841913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.643 qpair failed and we were unable to recover it. 
00:25:20.643 [2024-07-15 23:28:35.842086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.643 [2024-07-15 23:28:35.842109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.643 qpair failed and we were unable to recover it. 00:25:20.643 [2024-07-15 23:28:35.842274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.643 [2024-07-15 23:28:35.842301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.643 qpair failed and we were unable to recover it. 00:25:20.643 [2024-07-15 23:28:35.842461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.643 [2024-07-15 23:28:35.842489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.643 qpair failed and we were unable to recover it. 00:25:20.643 [2024-07-15 23:28:35.842690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.643 [2024-07-15 23:28:35.842718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.643 qpair failed and we were unable to recover it. 00:25:20.643 [2024-07-15 23:28:35.842866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.643 [2024-07-15 23:28:35.842890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.643 qpair failed and we were unable to recover it. 00:25:20.643 [2024-07-15 23:28:35.843044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.643 [2024-07-15 23:28:35.843071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.643 qpair failed and we were unable to recover it. 00:25:20.643 [2024-07-15 23:28:35.843241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.643 [2024-07-15 23:28:35.843263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.643 qpair failed and we were unable to recover it. 00:25:20.643 [2024-07-15 23:28:35.843408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.643 [2024-07-15 23:28:35.843436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.643 qpair failed and we were unable to recover it. 00:25:20.643 [2024-07-15 23:28:35.843581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.643 [2024-07-15 23:28:35.843609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.643 qpair failed and we were unable to recover it. 00:25:20.643 [2024-07-15 23:28:35.843730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.643 [2024-07-15 23:28:35.843769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.643 qpair failed and we were unable to recover it. 
00:25:20.643 [2024-07-15 23:28:35.843911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.643 [2024-07-15 23:28:35.843936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.643 qpair failed and we were unable to recover it. 00:25:20.643 [2024-07-15 23:28:35.844074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.643 [2024-07-15 23:28:35.844102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.643 qpair failed and we were unable to recover it. 00:25:20.643 [2024-07-15 23:28:35.844253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.643 [2024-07-15 23:28:35.844275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.643 qpair failed and we were unable to recover it. 00:25:20.643 [2024-07-15 23:28:35.844491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.643 [2024-07-15 23:28:35.844519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.643 qpair failed and we were unable to recover it. 00:25:20.643 [2024-07-15 23:28:35.844687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.643 [2024-07-15 23:28:35.844715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.643 qpair failed and we were unable to recover it. 00:25:20.643 [2024-07-15 23:28:35.844863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.643 [2024-07-15 23:28:35.844887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.643 qpair failed and we were unable to recover it. 00:25:20.643 [2024-07-15 23:28:35.844990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.643 [2024-07-15 23:28:35.845032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.643 qpair failed and we were unable to recover it. 00:25:20.643 [2024-07-15 23:28:35.845189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.643 [2024-07-15 23:28:35.845216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.643 qpair failed and we were unable to recover it. 00:25:20.643 [2024-07-15 23:28:35.845414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.643 [2024-07-15 23:28:35.845441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.643 qpair failed and we were unable to recover it. 00:25:20.643 [2024-07-15 23:28:35.845602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.643 [2024-07-15 23:28:35.845639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.643 qpair failed and we were unable to recover it. 
00:25:20.643 [2024-07-15 23:28:35.845824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.643 [2024-07-15 23:28:35.845848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.643 qpair failed and we were unable to recover it. 00:25:20.643 [2024-07-15 23:28:35.845998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.643 [2024-07-15 23:28:35.846035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.643 qpair failed and we were unable to recover it. 00:25:20.643 [2024-07-15 23:28:35.846210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.643 [2024-07-15 23:28:35.846238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.643 qpair failed and we were unable to recover it. 00:25:20.643 [2024-07-15 23:28:35.846364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.643 [2024-07-15 23:28:35.846391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.643 qpair failed and we were unable to recover it. 00:25:20.643 [2024-07-15 23:28:35.846601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.643 [2024-07-15 23:28:35.846637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.643 qpair failed and we were unable to recover it. 00:25:20.643 [2024-07-15 23:28:35.846816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.643 [2024-07-15 23:28:35.846841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.643 qpair failed and we were unable to recover it. 00:25:20.643 [2024-07-15 23:28:35.846961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.643 [2024-07-15 23:28:35.846985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.643 qpair failed and we were unable to recover it. 00:25:20.643 [2024-07-15 23:28:35.847188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.643 [2024-07-15 23:28:35.847211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.643 qpair failed and we were unable to recover it. 00:25:20.643 [2024-07-15 23:28:35.847414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.643 [2024-07-15 23:28:35.847441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.643 qpair failed and we were unable to recover it. 00:25:20.643 [2024-07-15 23:28:35.847600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.643 [2024-07-15 23:28:35.847627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.643 qpair failed and we were unable to recover it. 
00:25:20.643 [2024-07-15 23:28:35.847816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.643 [2024-07-15 23:28:35.847839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.643 qpair failed and we were unable to recover it. 00:25:20.643 [2024-07-15 23:28:35.847976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.643 [2024-07-15 23:28:35.848000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.643 qpair failed and we were unable to recover it. 00:25:20.643 [2024-07-15 23:28:35.848159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.643 [2024-07-15 23:28:35.848186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.643 qpair failed and we were unable to recover it. 00:25:20.643 [2024-07-15 23:28:35.848350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.643 [2024-07-15 23:28:35.848378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.643 qpair failed and we were unable to recover it. 00:25:20.643 [2024-07-15 23:28:35.848557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.643 [2024-07-15 23:28:35.848584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.643 qpair failed and we were unable to recover it. 00:25:20.643 [2024-07-15 23:28:35.848742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.643 [2024-07-15 23:28:35.848789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.643 qpair failed and we were unable to recover it. 00:25:20.643 [2024-07-15 23:28:35.848930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.643 [2024-07-15 23:28:35.848953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.643 qpair failed and we were unable to recover it. 00:25:20.643 [2024-07-15 23:28:35.849068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.643 [2024-07-15 23:28:35.849107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.643 qpair failed and we were unable to recover it. 00:25:20.643 [2024-07-15 23:28:35.849336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.643 [2024-07-15 23:28:35.849363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.643 qpair failed and we were unable to recover it. 00:25:20.643 [2024-07-15 23:28:35.849534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.643 [2024-07-15 23:28:35.849561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.643 qpair failed and we were unable to recover it. 
00:25:20.643 [2024-07-15 23:28:35.849770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.643 [2024-07-15 23:28:35.849811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.643 qpair failed and we were unable to recover it.
[... the same pair of errors — posix_sock_create: connect() failed, errno = 111, followed by nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." — repeats continuously, with only the timestamps advancing from 23:28:35.849 to 23:28:35.889 ...]
00:25:20.924 [2024-07-15 23:28:35.889391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.924 [2024-07-15 23:28:35.889414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.924 qpair failed and we were unable to recover it.
00:25:20.924 [2024-07-15 23:28:35.889633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.924 [2024-07-15 23:28:35.889661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.924 qpair failed and we were unable to recover it. 00:25:20.924 [2024-07-15 23:28:35.889851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.924 [2024-07-15 23:28:35.889875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.924 qpair failed and we were unable to recover it. 00:25:20.924 [2024-07-15 23:28:35.889983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.924 [2024-07-15 23:28:35.890006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.924 qpair failed and we were unable to recover it. 00:25:20.924 [2024-07-15 23:28:35.890172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.924 [2024-07-15 23:28:35.890214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.924 qpair failed and we were unable to recover it. 00:25:20.924 [2024-07-15 23:28:35.890394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.924 [2024-07-15 23:28:35.890421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.924 qpair failed and we were unable to recover it. 00:25:20.924 [2024-07-15 23:28:35.890612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.924 [2024-07-15 23:28:35.890634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.924 qpair failed and we were unable to recover it. 00:25:20.924 [2024-07-15 23:28:35.890838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.924 [2024-07-15 23:28:35.890866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.924 qpair failed and we were unable to recover it. 00:25:20.924 [2024-07-15 23:28:35.891005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.924 [2024-07-15 23:28:35.891032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.924 qpair failed and we were unable to recover it. 00:25:20.924 [2024-07-15 23:28:35.891197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.924 [2024-07-15 23:28:35.891220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.924 qpair failed and we were unable to recover it. 00:25:20.924 [2024-07-15 23:28:35.891345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.924 [2024-07-15 23:28:35.891383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.924 qpair failed and we were unable to recover it. 
00:25:20.924 [2024-07-15 23:28:35.891561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.924 [2024-07-15 23:28:35.891592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.924 qpair failed and we were unable to recover it. 00:25:20.924 [2024-07-15 23:28:35.891796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.924 [2024-07-15 23:28:35.891821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.924 qpair failed and we were unable to recover it. 00:25:20.924 [2024-07-15 23:28:35.891972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.924 [2024-07-15 23:28:35.891999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.924 qpair failed and we were unable to recover it. 00:25:20.924 [2024-07-15 23:28:35.892147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.924 [2024-07-15 23:28:35.892175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.924 qpair failed and we were unable to recover it. 00:25:20.924 [2024-07-15 23:28:35.892329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.924 [2024-07-15 23:28:35.892351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.924 qpair failed and we were unable to recover it. 00:25:20.924 [2024-07-15 23:28:35.892483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.924 [2024-07-15 23:28:35.892521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.924 qpair failed and we were unable to recover it. 00:25:20.924 [2024-07-15 23:28:35.892686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.924 [2024-07-15 23:28:35.892713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.924 qpair failed and we were unable to recover it. 00:25:20.924 [2024-07-15 23:28:35.892848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.924 [2024-07-15 23:28:35.892871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.924 qpair failed and we were unable to recover it. 00:25:20.924 [2024-07-15 23:28:35.893041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.924 [2024-07-15 23:28:35.893064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.924 qpair failed and we were unable to recover it. 00:25:20.924 [2024-07-15 23:28:35.893235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.924 [2024-07-15 23:28:35.893263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.924 qpair failed and we were unable to recover it. 
00:25:20.924 [2024-07-15 23:28:35.893433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.924 [2024-07-15 23:28:35.893455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.924 qpair failed and we were unable to recover it. 00:25:20.924 [2024-07-15 23:28:35.893629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.924 [2024-07-15 23:28:35.893656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.924 qpair failed and we were unable to recover it. 00:25:20.924 [2024-07-15 23:28:35.893802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.924 [2024-07-15 23:28:35.893831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.924 qpair failed and we were unable to recover it. 00:25:20.924 [2024-07-15 23:28:35.893974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.924 [2024-07-15 23:28:35.893997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.924 qpair failed and we were unable to recover it. 00:25:20.924 [2024-07-15 23:28:35.894179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.924 [2024-07-15 23:28:35.894207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.924 qpair failed and we were unable to recover it. 00:25:20.924 [2024-07-15 23:28:35.894390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.924 [2024-07-15 23:28:35.894417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.924 qpair failed and we were unable to recover it. 00:25:20.924 [2024-07-15 23:28:35.894616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.925 [2024-07-15 23:28:35.894643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.925 qpair failed and we were unable to recover it. 00:25:20.925 [2024-07-15 23:28:35.894812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.925 [2024-07-15 23:28:35.894837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.925 qpair failed and we were unable to recover it. 00:25:20.925 [2024-07-15 23:28:35.895001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.925 [2024-07-15 23:28:35.895024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.925 qpair failed and we were unable to recover it. 00:25:20.925 [2024-07-15 23:28:35.895201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.925 [2024-07-15 23:28:35.895223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.925 qpair failed and we were unable to recover it. 
00:25:20.925 [2024-07-15 23:28:35.895393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.925 [2024-07-15 23:28:35.895420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.925 qpair failed and we were unable to recover it. 00:25:20.925 [2024-07-15 23:28:35.895630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.925 [2024-07-15 23:28:35.895657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.925 qpair failed and we were unable to recover it. 00:25:20.925 [2024-07-15 23:28:35.895789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.925 [2024-07-15 23:28:35.895812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.925 qpair failed and we were unable to recover it. 00:25:20.925 [2024-07-15 23:28:35.895946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.925 [2024-07-15 23:28:35.895970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.925 qpair failed and we were unable to recover it. 00:25:20.925 [2024-07-15 23:28:35.896166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.925 [2024-07-15 23:28:35.896193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.925 qpair failed and we were unable to recover it. 00:25:20.925 [2024-07-15 23:28:35.896365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.925 [2024-07-15 23:28:35.896387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.925 qpair failed and we were unable to recover it. 00:25:20.925 [2024-07-15 23:28:35.896550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.925 [2024-07-15 23:28:35.896577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.925 qpair failed and we were unable to recover it. 00:25:20.925 [2024-07-15 23:28:35.896779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.925 [2024-07-15 23:28:35.896806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.925 qpair failed and we were unable to recover it. 00:25:20.925 [2024-07-15 23:28:35.896971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.925 [2024-07-15 23:28:35.896995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.925 qpair failed and we were unable to recover it. 00:25:20.925 [2024-07-15 23:28:35.897172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.925 [2024-07-15 23:28:35.897199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.925 qpair failed and we were unable to recover it. 
00:25:20.925 [2024-07-15 23:28:35.897363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.925 [2024-07-15 23:28:35.897390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.925 qpair failed and we were unable to recover it. 00:25:20.925 [2024-07-15 23:28:35.897558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.925 [2024-07-15 23:28:35.897580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.925 qpair failed and we were unable to recover it. 00:25:20.925 [2024-07-15 23:28:35.897760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.925 [2024-07-15 23:28:35.897813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.925 qpair failed and we were unable to recover it. 00:25:20.925 [2024-07-15 23:28:35.897964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.925 [2024-07-15 23:28:35.897992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.925 qpair failed and we were unable to recover it. 00:25:20.925 [2024-07-15 23:28:35.898154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.925 [2024-07-15 23:28:35.898177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.925 qpair failed and we were unable to recover it. 00:25:20.925 [2024-07-15 23:28:35.898362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.925 [2024-07-15 23:28:35.898389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.925 qpair failed and we were unable to recover it. 00:25:20.925 [2024-07-15 23:28:35.898578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.925 [2024-07-15 23:28:35.898605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.925 qpair failed and we were unable to recover it. 00:25:20.925 [2024-07-15 23:28:35.898780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.925 [2024-07-15 23:28:35.898802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.925 qpair failed and we were unable to recover it. 00:25:20.925 [2024-07-15 23:28:35.898956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.925 [2024-07-15 23:28:35.898984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.925 qpair failed and we were unable to recover it. 00:25:20.925 [2024-07-15 23:28:35.899134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.925 [2024-07-15 23:28:35.899161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.925 qpair failed and we were unable to recover it. 
00:25:20.925 [2024-07-15 23:28:35.899321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.925 [2024-07-15 23:28:35.899349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.925 qpair failed and we were unable to recover it. 00:25:20.925 [2024-07-15 23:28:35.899518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.925 [2024-07-15 23:28:35.899546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.925 qpair failed and we were unable to recover it. 00:25:20.925 [2024-07-15 23:28:35.899713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.925 [2024-07-15 23:28:35.899746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.925 qpair failed and we were unable to recover it. 00:25:20.925 [2024-07-15 23:28:35.899898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.925 [2024-07-15 23:28:35.899921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.925 qpair failed and we were unable to recover it. 00:25:20.925 [2024-07-15 23:28:35.900069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.925 [2024-07-15 23:28:35.900107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.925 qpair failed and we were unable to recover it. 00:25:20.925 [2024-07-15 23:28:35.900254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.925 [2024-07-15 23:28:35.900281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.925 qpair failed and we were unable to recover it. 00:25:20.925 [2024-07-15 23:28:35.900491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.925 [2024-07-15 23:28:35.900513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.925 qpair failed and we were unable to recover it. 00:25:20.925 [2024-07-15 23:28:35.900671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.925 [2024-07-15 23:28:35.900698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.925 qpair failed and we were unable to recover it. 00:25:20.925 [2024-07-15 23:28:35.900847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.925 [2024-07-15 23:28:35.900871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.925 qpair failed and we were unable to recover it. 00:25:20.925 [2024-07-15 23:28:35.901026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.925 [2024-07-15 23:28:35.901049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.925 qpair failed and we were unable to recover it. 
00:25:20.926 [2024-07-15 23:28:35.901235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.926 [2024-07-15 23:28:35.901263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.926 qpair failed and we were unable to recover it. 00:25:20.926 [2024-07-15 23:28:35.901390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.926 [2024-07-15 23:28:35.901418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.926 qpair failed and we were unable to recover it. 00:25:20.926 [2024-07-15 23:28:35.901567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.926 [2024-07-15 23:28:35.901590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.926 qpair failed and we were unable to recover it. 00:25:20.926 [2024-07-15 23:28:35.901824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.926 [2024-07-15 23:28:35.901852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.926 qpair failed and we were unable to recover it. 00:25:20.926 [2024-07-15 23:28:35.901969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.926 [2024-07-15 23:28:35.901996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.926 qpair failed and we were unable to recover it. 00:25:20.926 [2024-07-15 23:28:35.902154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.926 [2024-07-15 23:28:35.902192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.926 qpair failed and we were unable to recover it. 00:25:20.926 [2024-07-15 23:28:35.902360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.926 [2024-07-15 23:28:35.902387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.926 qpair failed and we were unable to recover it. 00:25:20.926 [2024-07-15 23:28:35.902552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.926 [2024-07-15 23:28:35.902579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.926 qpair failed and we were unable to recover it. 00:25:20.926 [2024-07-15 23:28:35.902752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.926 [2024-07-15 23:28:35.902775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.926 qpair failed and we were unable to recover it. 00:25:20.926 [2024-07-15 23:28:35.902945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.926 [2024-07-15 23:28:35.902972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.926 qpair failed and we were unable to recover it. 
00:25:20.926 [2024-07-15 23:28:35.903131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.926 [2024-07-15 23:28:35.903159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.926 qpair failed and we were unable to recover it. 00:25:20.926 [2024-07-15 23:28:35.903328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.926 [2024-07-15 23:28:35.903351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.926 qpair failed and we were unable to recover it. 00:25:20.926 [2024-07-15 23:28:35.903527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.926 [2024-07-15 23:28:35.903555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.926 qpair failed and we were unable to recover it. 00:25:20.926 [2024-07-15 23:28:35.903731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.926 [2024-07-15 23:28:35.903763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.926 qpair failed and we were unable to recover it. 00:25:20.926 [2024-07-15 23:28:35.903912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.926 [2024-07-15 23:28:35.903935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.926 qpair failed and we were unable to recover it. 00:25:20.926 [2024-07-15 23:28:35.904110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.926 [2024-07-15 23:28:35.904133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.926 qpair failed and we were unable to recover it. 00:25:20.926 [2024-07-15 23:28:35.904259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.926 [2024-07-15 23:28:35.904286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.926 qpair failed and we were unable to recover it. 00:25:20.926 [2024-07-15 23:28:35.904500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.926 [2024-07-15 23:28:35.904522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.926 qpair failed and we were unable to recover it. 00:25:20.926 [2024-07-15 23:28:35.904728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.926 [2024-07-15 23:28:35.904764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.926 qpair failed and we were unable to recover it. 00:25:20.926 [2024-07-15 23:28:35.904897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.926 [2024-07-15 23:28:35.904925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.926 qpair failed and we were unable to recover it. 
00:25:20.926 [2024-07-15 23:28:35.905064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.926 [2024-07-15 23:28:35.905100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.926 qpair failed and we were unable to recover it. 00:25:20.926 [2024-07-15 23:28:35.905287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.926 [2024-07-15 23:28:35.905314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.926 qpair failed and we were unable to recover it. 00:25:20.926 [2024-07-15 23:28:35.905479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.926 [2024-07-15 23:28:35.905506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.926 qpair failed and we were unable to recover it. 00:25:20.926 [2024-07-15 23:28:35.905670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.926 [2024-07-15 23:28:35.905697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.926 qpair failed and we were unable to recover it. 00:25:20.926 [2024-07-15 23:28:35.905853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.926 [2024-07-15 23:28:35.905877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.926 qpair failed and we were unable to recover it. 00:25:20.926 [2024-07-15 23:28:35.905996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.926 [2024-07-15 23:28:35.906038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.926 qpair failed and we were unable to recover it. 00:25:20.926 [2024-07-15 23:28:35.906245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.926 [2024-07-15 23:28:35.906268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.926 qpair failed and we were unable to recover it. 00:25:20.926 [2024-07-15 23:28:35.906470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.926 [2024-07-15 23:28:35.906509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.926 qpair failed and we were unable to recover it. 00:25:20.926 [2024-07-15 23:28:35.906695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.926 [2024-07-15 23:28:35.906722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.926 qpair failed and we were unable to recover it. 00:25:20.926 [2024-07-15 23:28:35.906879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.926 [2024-07-15 23:28:35.906904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.926 qpair failed and we were unable to recover it. 
00:25:20.926 [2024-07-15 23:28:35.907039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.926 [2024-07-15 23:28:35.907062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.926 qpair failed and we were unable to recover it. 00:25:20.926 [2024-07-15 23:28:35.907204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.926 [2024-07-15 23:28:35.907231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.926 qpair failed and we were unable to recover it. 00:25:20.926 [2024-07-15 23:28:35.907446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.926 [2024-07-15 23:28:35.907468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.926 qpair failed and we were unable to recover it. 00:25:20.926 [2024-07-15 23:28:35.907622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.926 [2024-07-15 23:28:35.907649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.926 qpair failed and we were unable to recover it. 00:25:20.926 [2024-07-15 23:28:35.907832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.926 [2024-07-15 23:28:35.907860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.926 qpair failed and we were unable to recover it. 00:25:20.926 [2024-07-15 23:28:35.908008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.926 [2024-07-15 23:28:35.908046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.926 qpair failed and we were unable to recover it. 00:25:20.926 [2024-07-15 23:28:35.908259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.926 [2024-07-15 23:28:35.908286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.926 qpair failed and we were unable to recover it. 00:25:20.926 [2024-07-15 23:28:35.908471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.926 [2024-07-15 23:28:35.908498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.926 qpair failed and we were unable to recover it. 00:25:20.926 [2024-07-15 23:28:35.908659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.926 [2024-07-15 23:28:35.908681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.926 qpair failed and we were unable to recover it. 00:25:20.926 [2024-07-15 23:28:35.908829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.926 [2024-07-15 23:28:35.908871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.926 qpair failed and we were unable to recover it. 
00:25:20.926 [2024-07-15 23:28:35.908993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.926 [2024-07-15 23:28:35.909021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.926 qpair failed and we were unable to recover it. 00:25:20.926 [2024-07-15 23:28:35.909202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.927 [2024-07-15 23:28:35.909225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.927 qpair failed and we were unable to recover it. 00:25:20.927 [2024-07-15 23:28:35.909396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.927 [2024-07-15 23:28:35.909423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.927 qpair failed and we were unable to recover it. 00:25:20.927 [2024-07-15 23:28:35.909601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.927 [2024-07-15 23:28:35.909628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.927 qpair failed and we were unable to recover it. 00:25:20.927 [2024-07-15 23:28:35.909787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.927 [2024-07-15 23:28:35.909811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.927 qpair failed and we were unable to recover it. 00:25:20.927 [2024-07-15 23:28:35.909952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.927 [2024-07-15 23:28:35.909979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.927 qpair failed and we were unable to recover it. 00:25:20.927 [2024-07-15 23:28:35.910195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.927 [2024-07-15 23:28:35.910222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.927 qpair failed and we were unable to recover it. 00:25:20.927 [2024-07-15 23:28:35.910409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.927 [2024-07-15 23:28:35.910431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.927 qpair failed and we were unable to recover it. 00:25:20.927 [2024-07-15 23:28:35.910643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.927 [2024-07-15 23:28:35.910671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.927 qpair failed and we were unable to recover it. 00:25:20.927 [2024-07-15 23:28:35.910820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.927 [2024-07-15 23:28:35.910848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.927 qpair failed and we were unable to recover it. 
00:25:20.927 [2024-07-15 23:28:35.911099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.927 [2024-07-15 23:28:35.911121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.927 qpair failed and we were unable to recover it. 00:25:20.927 [2024-07-15 23:28:35.911329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.927 [2024-07-15 23:28:35.911357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.927 qpair failed and we were unable to recover it. 00:25:20.927 [2024-07-15 23:28:35.911516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.927 [2024-07-15 23:28:35.911543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.927 qpair failed and we were unable to recover it. 00:25:20.927 [2024-07-15 23:28:35.911747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.927 [2024-07-15 23:28:35.911785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.927 qpair failed and we were unable to recover it. 00:25:20.927 [2024-07-15 23:28:35.911943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.927 [2024-07-15 23:28:35.911971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.927 qpair failed and we were unable to recover it. 00:25:20.927 [2024-07-15 23:28:35.912119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.927 [2024-07-15 23:28:35.912146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.927 qpair failed and we were unable to recover it. 00:25:20.927 [2024-07-15 23:28:35.912319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.927 [2024-07-15 23:28:35.912340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.927 qpair failed and we were unable to recover it. 00:25:20.927 [2024-07-15 23:28:35.912512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.927 [2024-07-15 23:28:35.912533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.927 qpair failed and we were unable to recover it. 00:25:20.927 [2024-07-15 23:28:35.912768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.927 [2024-07-15 23:28:35.912795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.927 qpair failed and we were unable to recover it. 00:25:20.927 [2024-07-15 23:28:35.912919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.927 [2024-07-15 23:28:35.912942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.927 qpair failed and we were unable to recover it. 
00:25:20.927 [2024-07-15 23:28:35.913076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.927 [2024-07-15 23:28:35.913098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.927 qpair failed and we were unable to recover it. 00:25:20.927 [2024-07-15 23:28:35.913288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.927 [2024-07-15 23:28:35.913314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.927 qpair failed and we were unable to recover it. 00:25:20.927 [2024-07-15 23:28:35.913476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.927 [2024-07-15 23:28:35.913497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.927 qpair failed and we were unable to recover it. 00:25:20.927 [2024-07-15 23:28:35.913674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.927 [2024-07-15 23:28:35.913700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.927 qpair failed and we were unable to recover it. 00:25:20.927 [2024-07-15 23:28:35.913846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.927 [2024-07-15 23:28:35.913885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.927 qpair failed and we were unable to recover it. 00:25:20.927 [2024-07-15 23:28:35.914018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.927 [2024-07-15 23:28:35.914057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.927 qpair failed and we were unable to recover it. 00:25:20.927 [2024-07-15 23:28:35.914211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.927 [2024-07-15 23:28:35.914248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.927 qpair failed and we were unable to recover it. 00:25:20.927 [2024-07-15 23:28:35.914430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.927 [2024-07-15 23:28:35.914456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.927 qpair failed and we were unable to recover it. 00:25:20.927 [2024-07-15 23:28:35.914595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.927 [2024-07-15 23:28:35.914634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.927 qpair failed and we were unable to recover it. 00:25:20.927 [2024-07-15 23:28:35.914807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.927 [2024-07-15 23:28:35.914834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.927 qpair failed and we were unable to recover it. 
00:25:20.927 [2024-07-15 23:28:35.914967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.927 [2024-07-15 23:28:35.914994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.927 qpair failed and we were unable to recover it. 00:25:20.927 [2024-07-15 23:28:35.915153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.927 [2024-07-15 23:28:35.915178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.927 qpair failed and we were unable to recover it. 00:25:20.927 [2024-07-15 23:28:35.915356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.927 [2024-07-15 23:28:35.915393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.927 qpair failed and we were unable to recover it. 00:25:20.927 [2024-07-15 23:28:35.915574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.927 [2024-07-15 23:28:35.915601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.927 qpair failed and we were unable to recover it. 00:25:20.927 [2024-07-15 23:28:35.915784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.927 [2024-07-15 23:28:35.915809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.927 qpair failed and we were unable to recover it. 00:25:20.927 [2024-07-15 23:28:35.915944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.927 [2024-07-15 23:28:35.915971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.927 qpair failed and we were unable to recover it. 00:25:20.927 [2024-07-15 23:28:35.916088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.927 [2024-07-15 23:28:35.916116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.927 qpair failed and we were unable to recover it. 00:25:20.927 [2024-07-15 23:28:35.916300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.927 [2024-07-15 23:28:35.916325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.927 qpair failed and we were unable to recover it. 00:25:20.927 [2024-07-15 23:28:35.916473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.927 [2024-07-15 23:28:35.916501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.927 qpair failed and we were unable to recover it. 00:25:20.927 [2024-07-15 23:28:35.916650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.927 [2024-07-15 23:28:35.916677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.927 qpair failed and we were unable to recover it. 
00:25:20.932 [2024-07-15 23:28:35.957568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.932 [2024-07-15 23:28:35.957595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.932 qpair failed and we were unable to recover it. 00:25:20.932 [2024-07-15 23:28:35.957758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.932 [2024-07-15 23:28:35.957784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.932 qpair failed and we were unable to recover it. 00:25:20.932 [2024-07-15 23:28:35.957951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.932 [2024-07-15 23:28:35.957978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.932 qpair failed and we were unable to recover it. 00:25:20.932 [2024-07-15 23:28:35.958128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.932 [2024-07-15 23:28:35.958155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.933 qpair failed and we were unable to recover it. 00:25:20.933 [2024-07-15 23:28:35.958349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.933 [2024-07-15 23:28:35.958373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.933 qpair failed and we were unable to recover it. 00:25:20.933 [2024-07-15 23:28:35.958561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.933 [2024-07-15 23:28:35.958588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.933 qpair failed and we were unable to recover it. 00:25:20.933 [2024-07-15 23:28:35.958801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.933 [2024-07-15 23:28:35.958829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.933 qpair failed and we were unable to recover it. 00:25:20.933 [2024-07-15 23:28:35.959001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.933 [2024-07-15 23:28:35.959026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.933 qpair failed and we were unable to recover it. 00:25:20.933 [2024-07-15 23:28:35.959190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.933 [2024-07-15 23:28:35.959217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.933 qpair failed and we were unable to recover it. 00:25:20.933 [2024-07-15 23:28:35.959404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.933 [2024-07-15 23:28:35.959435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.933 qpair failed and we were unable to recover it. 
00:25:20.933 [2024-07-15 23:28:35.959638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.933 [2024-07-15 23:28:35.959663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.933 qpair failed and we were unable to recover it. 00:25:20.933 [2024-07-15 23:28:35.959862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.933 [2024-07-15 23:28:35.959890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.933 qpair failed and we were unable to recover it. 00:25:20.933 [2024-07-15 23:28:35.960028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.933 [2024-07-15 23:28:35.960056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.933 qpair failed and we were unable to recover it. 00:25:20.933 [2024-07-15 23:28:35.960240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.933 [2024-07-15 23:28:35.960264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.933 qpair failed and we were unable to recover it. 00:25:20.933 [2024-07-15 23:28:35.960434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.933 [2024-07-15 23:28:35.960461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.933 qpair failed and we were unable to recover it. 00:25:20.933 [2024-07-15 23:28:35.960600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.933 [2024-07-15 23:28:35.960627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.933 qpair failed and we were unable to recover it. 00:25:20.933 [2024-07-15 23:28:35.960786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.933 [2024-07-15 23:28:35.960811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.933 qpair failed and we were unable to recover it. 00:25:20.933 [2024-07-15 23:28:35.960982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.933 [2024-07-15 23:28:35.961009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.933 qpair failed and we were unable to recover it. 00:25:20.933 [2024-07-15 23:28:35.961169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.933 [2024-07-15 23:28:35.961196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.933 qpair failed and we were unable to recover it. 00:25:20.933 [2024-07-15 23:28:35.961386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.933 [2024-07-15 23:28:35.961411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.933 qpair failed and we were unable to recover it. 
00:25:20.933 [2024-07-15 23:28:35.961603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.933 [2024-07-15 23:28:35.961631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.933 qpair failed and we were unable to recover it. 00:25:20.933 [2024-07-15 23:28:35.961805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.933 [2024-07-15 23:28:35.961833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.933 qpair failed and we were unable to recover it. 00:25:20.933 [2024-07-15 23:28:35.961978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.933 [2024-07-15 23:28:35.962003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.933 qpair failed and we were unable to recover it. 00:25:20.933 [2024-07-15 23:28:35.962149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.933 [2024-07-15 23:28:35.962189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.933 qpair failed and we were unable to recover it. 00:25:20.933 [2024-07-15 23:28:35.962349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.933 [2024-07-15 23:28:35.962376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.933 qpair failed and we were unable to recover it. 00:25:20.933 [2024-07-15 23:28:35.962490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.933 [2024-07-15 23:28:35.962515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.933 qpair failed and we were unable to recover it. 00:25:20.933 [2024-07-15 23:28:35.962639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.933 [2024-07-15 23:28:35.962664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.933 qpair failed and we were unable to recover it. 00:25:20.933 [2024-07-15 23:28:35.962840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.933 [2024-07-15 23:28:35.962865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.933 qpair failed and we were unable to recover it. 00:25:20.933 [2024-07-15 23:28:35.962999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.933 [2024-07-15 23:28:35.963023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.933 qpair failed and we were unable to recover it. 00:25:20.933 [2024-07-15 23:28:35.963194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.933 [2024-07-15 23:28:35.963221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.933 qpair failed and we were unable to recover it. 
00:25:20.933 [2024-07-15 23:28:35.963408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.933 [2024-07-15 23:28:35.963436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.933 qpair failed and we were unable to recover it. 00:25:20.933 [2024-07-15 23:28:35.963621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.933 [2024-07-15 23:28:35.963646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.933 qpair failed and we were unable to recover it. 00:25:20.933 [2024-07-15 23:28:35.963842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.933 [2024-07-15 23:28:35.963870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.933 qpair failed and we were unable to recover it. 00:25:20.933 [2024-07-15 23:28:35.964044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.933 [2024-07-15 23:28:35.964071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.933 qpair failed and we were unable to recover it. 00:25:20.933 [2024-07-15 23:28:35.964314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.933 [2024-07-15 23:28:35.964338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.933 qpair failed and we were unable to recover it. 00:25:20.934 [2024-07-15 23:28:35.964483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.934 [2024-07-15 23:28:35.964510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.934 qpair failed and we were unable to recover it. 00:25:20.934 [2024-07-15 23:28:35.964704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.934 [2024-07-15 23:28:35.964731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.934 qpair failed and we were unable to recover it. 00:25:20.934 [2024-07-15 23:28:35.964949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.934 [2024-07-15 23:28:35.964974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.934 qpair failed and we were unable to recover it. 00:25:20.934 [2024-07-15 23:28:35.965186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.934 [2024-07-15 23:28:35.965214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.934 qpair failed and we were unable to recover it. 00:25:20.934 [2024-07-15 23:28:35.965407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.934 [2024-07-15 23:28:35.965434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.934 qpair failed and we were unable to recover it. 
00:25:20.934 [2024-07-15 23:28:35.965571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.935 [2024-07-15 23:28:35.965595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.935 qpair failed and we were unable to recover it. 00:25:20.935 [2024-07-15 23:28:35.965793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.935 [2024-07-15 23:28:35.965821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.935 qpair failed and we were unable to recover it. 00:25:20.935 [2024-07-15 23:28:35.965978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.935 [2024-07-15 23:28:35.966005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.935 qpair failed and we were unable to recover it. 00:25:20.935 [2024-07-15 23:28:35.966264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.935 [2024-07-15 23:28:35.966289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.935 qpair failed and we were unable to recover it. 00:25:20.935 [2024-07-15 23:28:35.966492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.935 [2024-07-15 23:28:35.966519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.935 qpair failed and we were unable to recover it. 00:25:20.935 [2024-07-15 23:28:35.966707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.935 [2024-07-15 23:28:35.966735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.935 qpair failed and we were unable to recover it. 00:25:20.935 [2024-07-15 23:28:35.966965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.935 [2024-07-15 23:28:35.966990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.935 qpair failed and we were unable to recover it. 00:25:20.935 [2024-07-15 23:28:35.967179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.935 [2024-07-15 23:28:35.967206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.935 qpair failed and we were unable to recover it. 00:25:20.935 [2024-07-15 23:28:35.967366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.935 [2024-07-15 23:28:35.967394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.935 qpair failed and we were unable to recover it. 00:25:20.935 [2024-07-15 23:28:35.967519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.935 [2024-07-15 23:28:35.967543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.935 qpair failed and we were unable to recover it. 
00:25:20.935 [2024-07-15 23:28:35.967694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.935 [2024-07-15 23:28:35.967722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.935 qpair failed and we were unable to recover it. 00:25:20.935 [2024-07-15 23:28:35.967879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.935 [2024-07-15 23:28:35.967906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.935 qpair failed and we were unable to recover it. 00:25:20.935 [2024-07-15 23:28:35.968104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.935 [2024-07-15 23:28:35.968128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.935 qpair failed and we were unable to recover it. 00:25:20.935 [2024-07-15 23:28:35.968253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.935 [2024-07-15 23:28:35.968280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.935 qpair failed and we were unable to recover it. 00:25:20.935 [2024-07-15 23:28:35.968434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.935 [2024-07-15 23:28:35.968461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.935 qpair failed and we were unable to recover it. 00:25:20.935 [2024-07-15 23:28:35.968667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.935 [2024-07-15 23:28:35.968694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.935 qpair failed and we were unable to recover it. 00:25:20.935 [2024-07-15 23:28:35.968894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.935 [2024-07-15 23:28:35.968928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.935 qpair failed and we were unable to recover it. 00:25:20.935 [2024-07-15 23:28:35.969158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.935 [2024-07-15 23:28:35.969185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.935 qpair failed and we were unable to recover it. 00:25:20.935 [2024-07-15 23:28:35.969306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.935 [2024-07-15 23:28:35.969330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.935 qpair failed and we were unable to recover it. 00:25:20.935 [2024-07-15 23:28:35.969513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.935 [2024-07-15 23:28:35.969555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.935 qpair failed and we were unable to recover it. 
00:25:20.935 [2024-07-15 23:28:35.969709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.935 [2024-07-15 23:28:35.969745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.935 qpair failed and we were unable to recover it. 00:25:20.935 [2024-07-15 23:28:35.969923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.935 [2024-07-15 23:28:35.969947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.935 qpair failed and we were unable to recover it. 00:25:20.935 [2024-07-15 23:28:35.970150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.935 [2024-07-15 23:28:35.970177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.935 qpair failed and we were unable to recover it. 00:25:20.935 [2024-07-15 23:28:35.970347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.935 [2024-07-15 23:28:35.970374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.935 qpair failed and we were unable to recover it. 00:25:20.935 [2024-07-15 23:28:35.970579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.935 [2024-07-15 23:28:35.970603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.935 qpair failed and we were unable to recover it. 00:25:20.935 [2024-07-15 23:28:35.970791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.935 [2024-07-15 23:28:35.970819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.935 qpair failed and we were unable to recover it. 00:25:20.935 [2024-07-15 23:28:35.971005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.935 [2024-07-15 23:28:35.971033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.935 qpair failed and we were unable to recover it. 00:25:20.935 [2024-07-15 23:28:35.971185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.935 [2024-07-15 23:28:35.971209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.935 qpair failed and we were unable to recover it. 00:25:20.935 [2024-07-15 23:28:35.971381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.935 [2024-07-15 23:28:35.971408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.935 qpair failed and we were unable to recover it. 00:25:20.935 [2024-07-15 23:28:35.971598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.935 [2024-07-15 23:28:35.971625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.935 qpair failed and we were unable to recover it. 
00:25:20.935 [2024-07-15 23:28:35.971819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.935 [2024-07-15 23:28:35.971845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.935 qpair failed and we were unable to recover it. 00:25:20.935 [2024-07-15 23:28:35.971977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.935 [2024-07-15 23:28:35.972005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.935 qpair failed and we were unable to recover it. 00:25:20.935 [2024-07-15 23:28:35.972192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.935 [2024-07-15 23:28:35.972219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.935 qpair failed and we were unable to recover it. 00:25:20.935 [2024-07-15 23:28:35.972388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.935 [2024-07-15 23:28:35.972412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.935 qpair failed and we were unable to recover it. 00:25:20.935 [2024-07-15 23:28:35.972575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.935 [2024-07-15 23:28:35.972603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.935 qpair failed and we were unable to recover it. 00:25:20.935 [2024-07-15 23:28:35.972789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.935 [2024-07-15 23:28:35.972817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.935 qpair failed and we were unable to recover it. 00:25:20.935 [2024-07-15 23:28:35.973002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.935 [2024-07-15 23:28:35.973027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.935 qpair failed and we were unable to recover it. 00:25:20.935 [2024-07-15 23:28:35.973243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.935 [2024-07-15 23:28:35.973275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.935 qpair failed and we were unable to recover it. 00:25:20.935 [2024-07-15 23:28:35.973430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.935 [2024-07-15 23:28:35.973457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.935 qpair failed and we were unable to recover it. 00:25:20.935 [2024-07-15 23:28:35.973638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.935 [2024-07-15 23:28:35.973662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.935 qpair failed and we were unable to recover it. 
00:25:20.935 [2024-07-15 23:28:35.973822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.936 [2024-07-15 23:28:35.973850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.936 qpair failed and we were unable to recover it. 00:25:20.936 [2024-07-15 23:28:35.974039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.936 [2024-07-15 23:28:35.974067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.936 qpair failed and we were unable to recover it. 00:25:20.936 [2024-07-15 23:28:35.974291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.936 [2024-07-15 23:28:35.974315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.936 qpair failed and we were unable to recover it. 00:25:20.936 [2024-07-15 23:28:35.974535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.936 [2024-07-15 23:28:35.974562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.936 qpair failed and we were unable to recover it. 00:25:20.936 [2024-07-15 23:28:35.974707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.936 [2024-07-15 23:28:35.974735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.936 qpair failed and we were unable to recover it. 00:25:20.936 [2024-07-15 23:28:35.974926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.936 [2024-07-15 23:28:35.974951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.936 qpair failed and we were unable to recover it. 00:25:20.936 [2024-07-15 23:28:35.975143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.936 [2024-07-15 23:28:35.975170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.936 qpair failed and we were unable to recover it. 00:25:20.936 [2024-07-15 23:28:35.975434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.936 [2024-07-15 23:28:35.975462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.936 qpair failed and we were unable to recover it. 00:25:20.936 [2024-07-15 23:28:35.975627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.936 [2024-07-15 23:28:35.975654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.936 qpair failed and we were unable to recover it. 00:25:20.936 [2024-07-15 23:28:35.975834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.936 [2024-07-15 23:28:35.975867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.936 qpair failed and we were unable to recover it. 
00:25:20.936 [2024-07-15 23:28:35.976096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.936 [2024-07-15 23:28:35.976123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.936 qpair failed and we were unable to recover it. 00:25:20.936 [2024-07-15 23:28:35.976359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.936 [2024-07-15 23:28:35.976384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.936 qpair failed and we were unable to recover it. 00:25:20.936 [2024-07-15 23:28:35.976558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.936 [2024-07-15 23:28:35.976586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.936 qpair failed and we were unable to recover it. 00:25:20.936 [2024-07-15 23:28:35.976855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.936 [2024-07-15 23:28:35.976880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.936 qpair failed and we were unable to recover it. 00:25:20.936 [2024-07-15 23:28:35.977065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.936 [2024-07-15 23:28:35.977089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.936 qpair failed and we were unable to recover it. 00:25:20.936 [2024-07-15 23:28:35.977278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.936 [2024-07-15 23:28:35.977305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.936 qpair failed and we were unable to recover it. 00:25:20.936 [2024-07-15 23:28:35.977549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.936 [2024-07-15 23:28:35.977577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.936 qpair failed and we were unable to recover it. 00:25:20.936 [2024-07-15 23:28:35.977769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.936 [2024-07-15 23:28:35.977794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.936 qpair failed and we were unable to recover it. 00:25:20.936 [2024-07-15 23:28:35.977950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.936 [2024-07-15 23:28:35.977978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.936 qpair failed and we were unable to recover it. 00:25:20.936 [2024-07-15 23:28:35.978132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.936 [2024-07-15 23:28:35.978160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.936 qpair failed and we were unable to recover it. 
00:25:20.936 [2024-07-15 23:28:35.978346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.936 [2024-07-15 23:28:35.978371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.936 qpair failed and we were unable to recover it. 00:25:20.936 [2024-07-15 23:28:35.978525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.936 [2024-07-15 23:28:35.978552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.936 qpair failed and we were unable to recover it. 00:25:20.936 [2024-07-15 23:28:35.978769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.936 [2024-07-15 23:28:35.978797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.936 qpair failed and we were unable to recover it. 00:25:20.936 [2024-07-15 23:28:35.979040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.936 [2024-07-15 23:28:35.979065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.936 qpair failed and we were unable to recover it. 00:25:20.936 [2024-07-15 23:28:35.979208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.936 [2024-07-15 23:28:35.979240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.936 qpair failed and we were unable to recover it. 00:25:20.936 [2024-07-15 23:28:35.979437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.936 [2024-07-15 23:28:35.979464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.936 qpair failed and we were unable to recover it. 00:25:20.936 [2024-07-15 23:28:35.979641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.936 [2024-07-15 23:28:35.979665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.936 qpair failed and we were unable to recover it. 00:25:20.936 [2024-07-15 23:28:35.979920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.936 [2024-07-15 23:28:35.979948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.936 qpair failed and we were unable to recover it. 00:25:20.936 [2024-07-15 23:28:35.980153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.936 [2024-07-15 23:28:35.980180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.936 qpair failed and we were unable to recover it. 00:25:20.936 [2024-07-15 23:28:35.980387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.936 [2024-07-15 23:28:35.980412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.936 qpair failed and we were unable to recover it. 
00:25:20.936 [2024-07-15 23:28:35.980577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.936 [2024-07-15 23:28:35.980604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.936 qpair failed and we were unable to recover it. 00:25:20.936 [2024-07-15 23:28:35.980793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.936 [2024-07-15 23:28:35.980821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.936 qpair failed and we were unable to recover it. 00:25:20.936 [2024-07-15 23:28:35.981007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.936 [2024-07-15 23:28:35.981045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.936 qpair failed and we were unable to recover it. 00:25:20.936 [2024-07-15 23:28:35.981251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.936 [2024-07-15 23:28:35.981278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.936 qpair failed and we were unable to recover it. 00:25:20.936 [2024-07-15 23:28:35.981498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.936 [2024-07-15 23:28:35.981525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.936 qpair failed and we were unable to recover it. 00:25:20.936 [2024-07-15 23:28:35.981768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.936 [2024-07-15 23:28:35.981792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.936 qpair failed and we were unable to recover it. 00:25:20.936 [2024-07-15 23:28:35.981996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.936 [2024-07-15 23:28:35.982023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.936 qpair failed and we were unable to recover it. 00:25:20.936 [2024-07-15 23:28:35.982181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.936 [2024-07-15 23:28:35.982209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.936 qpair failed and we were unable to recover it. 00:25:20.936 [2024-07-15 23:28:35.982443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.937 [2024-07-15 23:28:35.982465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.937 qpair failed and we were unable to recover it. 00:25:20.937 [2024-07-15 23:28:35.982746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.937 [2024-07-15 23:28:35.982774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.937 qpair failed and we were unable to recover it. 
00:25:20.937 [2024-07-15 23:28:35.983047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.937 [2024-07-15 23:28:35.983074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.937 qpair failed and we were unable to recover it. 00:25:20.937 [2024-07-15 23:28:35.983310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.937 [2024-07-15 23:28:35.983333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.937 qpair failed and we were unable to recover it. 00:25:20.937 [2024-07-15 23:28:35.983578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.937 [2024-07-15 23:28:35.983605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.937 qpair failed and we were unable to recover it. 00:25:20.937 [2024-07-15 23:28:35.983792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.937 [2024-07-15 23:28:35.983820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.937 qpair failed and we were unable to recover it. 00:25:20.937 [2024-07-15 23:28:35.983967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.937 [2024-07-15 23:28:35.983992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.937 qpair failed and we were unable to recover it. 00:25:20.937 [2024-07-15 23:28:35.984189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.937 [2024-07-15 23:28:35.984216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.937 qpair failed and we were unable to recover it. 00:25:20.937 [2024-07-15 23:28:35.984447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.937 [2024-07-15 23:28:35.984474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.937 qpair failed and we were unable to recover it. 00:25:20.937 [2024-07-15 23:28:35.984698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.937 [2024-07-15 23:28:35.984735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.937 qpair failed and we were unable to recover it. 00:25:20.937 [2024-07-15 23:28:35.984914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.937 [2024-07-15 23:28:35.984941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.937 qpair failed and we were unable to recover it. 00:25:20.937 [2024-07-15 23:28:35.985131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.937 [2024-07-15 23:28:35.985158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.937 qpair failed and we were unable to recover it. 
00:25:20.937 [2024-07-15 23:28:35.985379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.937 [2024-07-15 23:28:35.985402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420
00:25:20.937 qpair failed and we were unable to recover it.
[... the same three-line pattern (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every reconnect attempt through 2024-07-15 23:28:36.038 ...]
00:25:20.942 [2024-07-15 23:28:36.039150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.942 [2024-07-15 23:28:36.039172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.942 qpair failed and we were unable to recover it. 00:25:20.942 [2024-07-15 23:28:36.039303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.942 [2024-07-15 23:28:36.039330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.942 qpair failed and we were unable to recover it. 00:25:20.942 [2024-07-15 23:28:36.039488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.942 [2024-07-15 23:28:36.039515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.942 qpair failed and we were unable to recover it. 00:25:20.942 [2024-07-15 23:28:36.039778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.942 [2024-07-15 23:28:36.039801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.942 qpair failed and we were unable to recover it. 00:25:20.942 [2024-07-15 23:28:36.040091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.942 [2024-07-15 23:28:36.040118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.942 qpair failed and we were unable to recover it. 00:25:20.942 [2024-07-15 23:28:36.040321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.942 [2024-07-15 23:28:36.040348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.942 qpair failed and we were unable to recover it. 00:25:20.942 [2024-07-15 23:28:36.040603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.942 [2024-07-15 23:28:36.040626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.942 qpair failed and we were unable to recover it. 00:25:20.942 [2024-07-15 23:28:36.040904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.942 [2024-07-15 23:28:36.040932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.942 qpair failed and we were unable to recover it. 00:25:20.942 [2024-07-15 23:28:36.041168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.942 [2024-07-15 23:28:36.041195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.942 qpair failed and we were unable to recover it. 00:25:20.942 [2024-07-15 23:28:36.041473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.943 [2024-07-15 23:28:36.041495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.943 qpair failed and we were unable to recover it. 
00:25:20.943 [2024-07-15 23:28:36.041692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.943 [2024-07-15 23:28:36.041719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.943 qpair failed and we were unable to recover it. 00:25:20.943 [2024-07-15 23:28:36.041992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.943 [2024-07-15 23:28:36.042020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.943 qpair failed and we were unable to recover it. 00:25:20.943 [2024-07-15 23:28:36.042282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.943 [2024-07-15 23:28:36.042304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.943 qpair failed and we were unable to recover it. 00:25:20.943 [2024-07-15 23:28:36.042552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.943 [2024-07-15 23:28:36.042579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.943 qpair failed and we were unable to recover it. 00:25:20.943 [2024-07-15 23:28:36.042804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.943 [2024-07-15 23:28:36.042832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.943 qpair failed and we were unable to recover it. 00:25:20.943 [2024-07-15 23:28:36.043073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.943 [2024-07-15 23:28:36.043095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.943 qpair failed and we were unable to recover it. 00:25:20.943 [2024-07-15 23:28:36.043311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.943 [2024-07-15 23:28:36.043338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.943 qpair failed and we were unable to recover it. 00:25:20.943 [2024-07-15 23:28:36.043528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.943 [2024-07-15 23:28:36.043557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.943 qpair failed and we were unable to recover it. 00:25:20.943 [2024-07-15 23:28:36.043755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.943 [2024-07-15 23:28:36.043778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.943 qpair failed and we were unable to recover it. 00:25:20.943 [2024-07-15 23:28:36.043976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.943 [2024-07-15 23:28:36.044004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.943 qpair failed and we were unable to recover it. 
00:25:20.943 [2024-07-15 23:28:36.044245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.943 [2024-07-15 23:28:36.044272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.943 qpair failed and we were unable to recover it. 00:25:20.943 [2024-07-15 23:28:36.044543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.943 [2024-07-15 23:28:36.044566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.943 qpair failed and we were unable to recover it. 00:25:20.943 [2024-07-15 23:28:36.044832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.943 [2024-07-15 23:28:36.044857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.943 qpair failed and we were unable to recover it. 00:25:20.943 [2024-07-15 23:28:36.045123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.943 [2024-07-15 23:28:36.045150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.943 qpair failed and we were unable to recover it. 00:25:20.943 [2024-07-15 23:28:36.045411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.943 [2024-07-15 23:28:36.045433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.943 qpair failed and we were unable to recover it. 00:25:20.943 [2024-07-15 23:28:36.045681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.943 [2024-07-15 23:28:36.045728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.943 qpair failed and we were unable to recover it. 00:25:20.943 [2024-07-15 23:28:36.046015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.943 [2024-07-15 23:28:36.046042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.943 qpair failed and we were unable to recover it. 00:25:20.943 [2024-07-15 23:28:36.046289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.943 [2024-07-15 23:28:36.046312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.943 qpair failed and we were unable to recover it. 00:25:20.943 [2024-07-15 23:28:36.046542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.943 [2024-07-15 23:28:36.046569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.943 qpair failed and we were unable to recover it. 00:25:20.943 [2024-07-15 23:28:36.046720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.943 [2024-07-15 23:28:36.046754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.943 qpair failed and we were unable to recover it. 
00:25:20.943 [2024-07-15 23:28:36.046985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.943 [2024-07-15 23:28:36.047009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.943 qpair failed and we were unable to recover it. 00:25:20.943 [2024-07-15 23:28:36.047246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.943 [2024-07-15 23:28:36.047273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.943 qpair failed and we were unable to recover it. 00:25:20.943 [2024-07-15 23:28:36.047506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.943 [2024-07-15 23:28:36.047533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.943 qpair failed and we were unable to recover it. 00:25:20.943 [2024-07-15 23:28:36.047764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.943 [2024-07-15 23:28:36.047787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.943 qpair failed and we were unable to recover it. 00:25:20.943 [2024-07-15 23:28:36.048010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.943 [2024-07-15 23:28:36.048037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.943 qpair failed and we were unable to recover it. 00:25:20.943 [2024-07-15 23:28:36.048306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.943 [2024-07-15 23:28:36.048333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.943 qpair failed and we were unable to recover it. 00:25:20.943 [2024-07-15 23:28:36.048541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.943 [2024-07-15 23:28:36.048563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.943 qpair failed and we were unable to recover it. 00:25:20.943 [2024-07-15 23:28:36.048773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.943 [2024-07-15 23:28:36.048801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.943 qpair failed and we were unable to recover it. 00:25:20.943 [2024-07-15 23:28:36.048963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.943 [2024-07-15 23:28:36.048990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.943 qpair failed and we were unable to recover it. 00:25:20.943 [2024-07-15 23:28:36.049203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.943 [2024-07-15 23:28:36.049225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.943 qpair failed and we were unable to recover it. 
00:25:20.943 [2024-07-15 23:28:36.049499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.943 [2024-07-15 23:28:36.049527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.943 qpair failed and we were unable to recover it. 00:25:20.943 [2024-07-15 23:28:36.049713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.943 [2024-07-15 23:28:36.049757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.943 qpair failed and we were unable to recover it. 00:25:20.943 [2024-07-15 23:28:36.049987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.943 [2024-07-15 23:28:36.050010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.943 qpair failed and we were unable to recover it. 00:25:20.943 [2024-07-15 23:28:36.050250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.943 [2024-07-15 23:28:36.050277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.943 qpair failed and we were unable to recover it. 00:25:20.943 [2024-07-15 23:28:36.050481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.944 [2024-07-15 23:28:36.050509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.944 qpair failed and we were unable to recover it. 00:25:20.944 [2024-07-15 23:28:36.050780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.944 [2024-07-15 23:28:36.050802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.944 qpair failed and we were unable to recover it. 00:25:20.944 [2024-07-15 23:28:36.051061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.944 [2024-07-15 23:28:36.051088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.944 qpair failed and we were unable to recover it. 00:25:20.944 [2024-07-15 23:28:36.051225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.944 [2024-07-15 23:28:36.051253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.944 qpair failed and we were unable to recover it. 00:25:20.944 [2024-07-15 23:28:36.051391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.944 [2024-07-15 23:28:36.051414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.944 qpair failed and we were unable to recover it. 00:25:20.944 [2024-07-15 23:28:36.051661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.944 [2024-07-15 23:28:36.051693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.944 qpair failed and we were unable to recover it. 
00:25:20.944 [2024-07-15 23:28:36.051872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.944 [2024-07-15 23:28:36.051897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.944 qpair failed and we were unable to recover it. 00:25:20.944 [2024-07-15 23:28:36.052077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.944 [2024-07-15 23:28:36.052100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.944 qpair failed and we were unable to recover it. 00:25:20.944 [2024-07-15 23:28:36.052310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.944 [2024-07-15 23:28:36.052337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.944 qpair failed and we were unable to recover it. 00:25:20.944 [2024-07-15 23:28:36.052498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.944 [2024-07-15 23:28:36.052526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.944 qpair failed and we were unable to recover it. 00:25:20.944 [2024-07-15 23:28:36.052690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.944 [2024-07-15 23:28:36.052713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.944 qpair failed and we were unable to recover it. 00:25:20.944 [2024-07-15 23:28:36.052903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.944 [2024-07-15 23:28:36.052931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.944 qpair failed and we were unable to recover it. 00:25:20.944 [2024-07-15 23:28:36.053087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.944 [2024-07-15 23:28:36.053114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.944 qpair failed and we were unable to recover it. 00:25:20.944 [2024-07-15 23:28:36.053252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.944 [2024-07-15 23:28:36.053290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.944 qpair failed and we were unable to recover it. 00:25:20.944 [2024-07-15 23:28:36.053441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.944 [2024-07-15 23:28:36.053483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.944 qpair failed and we were unable to recover it. 00:25:20.944 [2024-07-15 23:28:36.053640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.944 [2024-07-15 23:28:36.053667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.944 qpair failed and we were unable to recover it. 
00:25:20.944 [2024-07-15 23:28:36.053863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.944 [2024-07-15 23:28:36.053888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.944 qpair failed and we were unable to recover it. 00:25:20.944 [2024-07-15 23:28:36.054075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.944 [2024-07-15 23:28:36.054102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.944 qpair failed and we were unable to recover it. 00:25:20.944 [2024-07-15 23:28:36.054260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.944 [2024-07-15 23:28:36.054288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.944 qpair failed and we were unable to recover it. 00:25:20.944 [2024-07-15 23:28:36.054456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.944 [2024-07-15 23:28:36.054480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.944 qpair failed and we were unable to recover it. 00:25:20.944 [2024-07-15 23:28:36.054657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.944 [2024-07-15 23:28:36.054685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.944 qpair failed and we were unable to recover it. 00:25:20.944 [2024-07-15 23:28:36.054884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.944 [2024-07-15 23:28:36.054909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.944 qpair failed and we were unable to recover it. 00:25:20.944 [2024-07-15 23:28:36.055059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.944 [2024-07-15 23:28:36.055083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.944 qpair failed and we were unable to recover it. 00:25:20.944 [2024-07-15 23:28:36.055246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.944 [2024-07-15 23:28:36.055274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.944 qpair failed and we were unable to recover it. 00:25:20.944 [2024-07-15 23:28:36.055462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.944 [2024-07-15 23:28:36.055489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.944 qpair failed and we were unable to recover it. 00:25:20.944 [2024-07-15 23:28:36.055650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.944 [2024-07-15 23:28:36.055673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.944 qpair failed and we were unable to recover it. 
00:25:20.944 [2024-07-15 23:28:36.055855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.944 [2024-07-15 23:28:36.055883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.944 qpair failed and we were unable to recover it. 00:25:20.944 [2024-07-15 23:28:36.056038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.944 [2024-07-15 23:28:36.056066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.944 qpair failed and we were unable to recover it. 00:25:20.944 [2024-07-15 23:28:36.056219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.944 [2024-07-15 23:28:36.056242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.944 qpair failed and we were unable to recover it. 00:25:20.944 [2024-07-15 23:28:36.056450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.944 [2024-07-15 23:28:36.056477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.944 qpair failed and we were unable to recover it. 00:25:20.944 [2024-07-15 23:28:36.056639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.944 [2024-07-15 23:28:36.056667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.944 qpair failed and we were unable to recover it. 00:25:20.944 [2024-07-15 23:28:36.056896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.944 [2024-07-15 23:28:36.056921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.944 qpair failed and we were unable to recover it. 00:25:20.944 [2024-07-15 23:28:36.057100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.944 [2024-07-15 23:28:36.057132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.944 qpair failed and we were unable to recover it. 00:25:20.944 [2024-07-15 23:28:36.057302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.944 [2024-07-15 23:28:36.057329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.944 qpair failed and we were unable to recover it. 00:25:20.944 [2024-07-15 23:28:36.057498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.944 [2024-07-15 23:28:36.057521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.944 qpair failed and we were unable to recover it. 00:25:20.944 [2024-07-15 23:28:36.057692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.944 [2024-07-15 23:28:36.057720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.944 qpair failed and we were unable to recover it. 
00:25:20.944 [2024-07-15 23:28:36.057874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.944 [2024-07-15 23:28:36.057901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.944 qpair failed and we were unable to recover it. 00:25:20.944 [2024-07-15 23:28:36.058127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.944 [2024-07-15 23:28:36.058150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.944 qpair failed and we were unable to recover it. 00:25:20.944 [2024-07-15 23:28:36.058368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.944 [2024-07-15 23:28:36.058396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.944 qpair failed and we were unable to recover it. 00:25:20.944 [2024-07-15 23:28:36.058544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.944 [2024-07-15 23:28:36.058571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.944 qpair failed and we were unable to recover it. 00:25:20.944 [2024-07-15 23:28:36.058791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.944 [2024-07-15 23:28:36.058816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.944 qpair failed and we were unable to recover it. 00:25:20.944 [2024-07-15 23:28:36.058970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.945 [2024-07-15 23:28:36.059008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.945 qpair failed and we were unable to recover it. 00:25:20.945 [2024-07-15 23:28:36.059169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.945 [2024-07-15 23:28:36.059197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.945 qpair failed and we were unable to recover it. 00:25:20.945 [2024-07-15 23:28:36.059365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.945 [2024-07-15 23:28:36.059387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.945 qpair failed and we were unable to recover it. 00:25:20.945 [2024-07-15 23:28:36.059553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.945 [2024-07-15 23:28:36.059581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.945 qpair failed and we were unable to recover it. 00:25:20.945 [2024-07-15 23:28:36.059772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.945 [2024-07-15 23:28:36.059800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.945 qpair failed and we were unable to recover it. 
00:25:20.945 [2024-07-15 23:28:36.059946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.945 [2024-07-15 23:28:36.059970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.945 qpair failed and we were unable to recover it. 00:25:20.945 [2024-07-15 23:28:36.060172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.945 [2024-07-15 23:28:36.060201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.945 qpair failed and we were unable to recover it. 00:25:20.945 [2024-07-15 23:28:36.060399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.945 [2024-07-15 23:28:36.060436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.945 qpair failed and we were unable to recover it. 00:25:20.945 [2024-07-15 23:28:36.060605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.945 [2024-07-15 23:28:36.060632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.945 qpair failed and we were unable to recover it. 00:25:20.945 [2024-07-15 23:28:36.060836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.945 [2024-07-15 23:28:36.060861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.945 qpair failed and we were unable to recover it. 00:25:20.945 [2024-07-15 23:28:36.060981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.945 [2024-07-15 23:28:36.061006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.945 qpair failed and we were unable to recover it. 00:25:20.945 [2024-07-15 23:28:36.061195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.945 [2024-07-15 23:28:36.061218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.945 qpair failed and we were unable to recover it. 00:25:20.945 [2024-07-15 23:28:36.061404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.945 [2024-07-15 23:28:36.061431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.945 qpair failed and we were unable to recover it. 00:25:20.945 [2024-07-15 23:28:36.061591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.945 [2024-07-15 23:28:36.061619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.945 qpair failed and we were unable to recover it. 00:25:20.945 [2024-07-15 23:28:36.061847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.945 [2024-07-15 23:28:36.061880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.945 qpair failed and we were unable to recover it. 
00:25:20.945 [2024-07-15 23:28:36.062082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.945 [2024-07-15 23:28:36.062109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.945 qpair failed and we were unable to recover it. 00:25:20.945 [2024-07-15 23:28:36.062239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.945 [2024-07-15 23:28:36.062266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.945 qpair failed and we were unable to recover it. 00:25:20.945 [2024-07-15 23:28:36.062467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.945 [2024-07-15 23:28:36.062490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.945 qpair failed and we were unable to recover it. 00:25:20.945 [2024-07-15 23:28:36.062686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.945 [2024-07-15 23:28:36.062713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.945 qpair failed and we were unable to recover it. 00:25:20.945 [2024-07-15 23:28:36.062899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.945 [2024-07-15 23:28:36.062927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.945 qpair failed and we were unable to recover it. 00:25:20.945 [2024-07-15 23:28:36.063097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.945 [2024-07-15 23:28:36.063121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.945 qpair failed and we were unable to recover it. 00:25:20.945 [2024-07-15 23:28:36.063281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.945 [2024-07-15 23:28:36.063322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.945 qpair failed and we were unable to recover it. 00:25:20.945 [2024-07-15 23:28:36.063582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.945 [2024-07-15 23:28:36.063609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.945 qpair failed and we were unable to recover it. 00:25:20.945 [2024-07-15 23:28:36.063848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.945 [2024-07-15 23:28:36.063872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.945 qpair failed and we were unable to recover it. 00:25:20.945 [2024-07-15 23:28:36.064024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.945 [2024-07-15 23:28:36.064051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.945 qpair failed and we were unable to recover it. 
00:25:20.945 [2024-07-15 23:28:36.064200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.945 [2024-07-15 23:28:36.064228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.945 qpair failed and we were unable to recover it. 00:25:20.945 [2024-07-15 23:28:36.064402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.945 [2024-07-15 23:28:36.064424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.945 qpair failed and we were unable to recover it. 00:25:20.945 [2024-07-15 23:28:36.064632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.945 [2024-07-15 23:28:36.064659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.945 qpair failed and we were unable to recover it. 00:25:20.945 [2024-07-15 23:28:36.064815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.945 [2024-07-15 23:28:36.064843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.945 qpair failed and we were unable to recover it. 00:25:20.945 [2024-07-15 23:28:36.064970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.945 [2024-07-15 23:28:36.065013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.945 qpair failed and we were unable to recover it. 00:25:20.945 [2024-07-15 23:28:36.065208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.945 [2024-07-15 23:28:36.065236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.945 qpair failed and we were unable to recover it. 00:25:20.945 [2024-07-15 23:28:36.065385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.945 [2024-07-15 23:28:36.065413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.945 qpair failed and we were unable to recover it. 00:25:20.945 [2024-07-15 23:28:36.065654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.945 [2024-07-15 23:28:36.065677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.945 qpair failed and we were unable to recover it. 00:25:20.945 [2024-07-15 23:28:36.065877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.945 [2024-07-15 23:28:36.065904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.945 qpair failed and we were unable to recover it. 00:25:20.945 [2024-07-15 23:28:36.066073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.945 [2024-07-15 23:28:36.066100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.945 qpair failed and we were unable to recover it. 
00:25:20.945 [2024-07-15 23:28:36.066343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.945 [2024-07-15 23:28:36.066365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.945 qpair failed and we were unable to recover it. 00:25:20.945 [2024-07-15 23:28:36.066578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.945 [2024-07-15 23:28:36.066616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.945 qpair failed and we were unable to recover it. 00:25:20.945 [2024-07-15 23:28:36.066798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.945 [2024-07-15 23:28:36.066822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.945 qpair failed and we were unable to recover it. 00:25:20.945 [2024-07-15 23:28:36.066954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.945 [2024-07-15 23:28:36.066978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.945 qpair failed and we were unable to recover it. 00:25:20.945 [2024-07-15 23:28:36.067206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.945 [2024-07-15 23:28:36.067233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.945 qpair failed and we were unable to recover it. 00:25:20.945 [2024-07-15 23:28:36.067388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.945 [2024-07-15 23:28:36.067420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.945 qpair failed and we were unable to recover it. 00:25:20.945 [2024-07-15 23:28:36.067608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.946 [2024-07-15 23:28:36.067630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.946 qpair failed and we were unable to recover it. 00:25:20.946 [2024-07-15 23:28:36.067837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.946 [2024-07-15 23:28:36.067866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.946 qpair failed and we were unable to recover it. 00:25:20.946 [2024-07-15 23:28:36.068073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.946 [2024-07-15 23:28:36.068100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.946 qpair failed and we were unable to recover it. 00:25:20.946 [2024-07-15 23:28:36.068271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.946 [2024-07-15 23:28:36.068293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.946 qpair failed and we were unable to recover it. 
00:25:20.946 [2024-07-15 23:28:36.068495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.946 [2024-07-15 23:28:36.068523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.946 qpair failed and we were unable to recover it. 00:25:20.946 [2024-07-15 23:28:36.068645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.946 [2024-07-15 23:28:36.068673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.946 qpair failed and we were unable to recover it. 00:25:20.946 [2024-07-15 23:28:36.068823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.946 [2024-07-15 23:28:36.068848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.946 qpair failed and we were unable to recover it. 00:25:20.946 [2024-07-15 23:28:36.069024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.946 [2024-07-15 23:28:36.069052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.946 qpair failed and we were unable to recover it. 00:25:20.946 [2024-07-15 23:28:36.069239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.946 [2024-07-15 23:28:36.069267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.946 qpair failed and we were unable to recover it. 00:25:20.946 [2024-07-15 23:28:36.069429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.946 [2024-07-15 23:28:36.069452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.946 qpair failed and we were unable to recover it. 00:25:20.946 [2024-07-15 23:28:36.069627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.946 [2024-07-15 23:28:36.069655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.946 qpair failed and we were unable to recover it. 00:25:20.946 [2024-07-15 23:28:36.069799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.946 [2024-07-15 23:28:36.069826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.946 qpair failed and we were unable to recover it. 00:25:20.946 [2024-07-15 23:28:36.070025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.946 [2024-07-15 23:28:36.070048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.946 qpair failed and we were unable to recover it. 00:25:20.946 [2024-07-15 23:28:36.070237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.946 [2024-07-15 23:28:36.070264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.946 qpair failed and we were unable to recover it. 
00:25:20.946 [2024-07-15 23:28:36.070421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.946 [2024-07-15 23:28:36.070448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.946 qpair failed and we were unable to recover it. 00:25:20.946 [2024-07-15 23:28:36.070665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.946 [2024-07-15 23:28:36.070688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.946 qpair failed and we were unable to recover it. 00:25:20.946 [2024-07-15 23:28:36.070870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.946 [2024-07-15 23:28:36.070898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.946 qpair failed and we were unable to recover it. 00:25:20.946 [2024-07-15 23:28:36.071050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.946 [2024-07-15 23:28:36.071077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.946 qpair failed and we were unable to recover it. 00:25:20.946 [2024-07-15 23:28:36.071243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.946 [2024-07-15 23:28:36.071269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.946 qpair failed and we were unable to recover it. 00:25:20.946 [2024-07-15 23:28:36.071438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.946 [2024-07-15 23:28:36.071467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.946 qpair failed and we were unable to recover it. 00:25:20.946 [2024-07-15 23:28:36.071584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.946 [2024-07-15 23:28:36.071612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.946 qpair failed and we were unable to recover it. 00:25:20.946 [2024-07-15 23:28:36.071795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.946 [2024-07-15 23:28:36.071820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.946 qpair failed and we were unable to recover it. 00:25:20.946 [2024-07-15 23:28:36.071995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.946 [2024-07-15 23:28:36.072037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.946 qpair failed and we were unable to recover it. 00:25:20.946 [2024-07-15 23:28:36.072201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.946 [2024-07-15 23:28:36.072228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.946 qpair failed and we were unable to recover it. 
00:25:20.946 [2024-07-15 23:28:36.072430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.946 [2024-07-15 23:28:36.072453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.946 qpair failed and we were unable to recover it. 00:25:20.946 [2024-07-15 23:28:36.072612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.946 [2024-07-15 23:28:36.072639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.946 qpair failed and we were unable to recover it. 00:25:20.946 [2024-07-15 23:28:36.072796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.946 [2024-07-15 23:28:36.072822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.946 qpair failed and we were unable to recover it. 00:25:20.946 [2024-07-15 23:28:36.072968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.946 [2024-07-15 23:28:36.073001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.946 qpair failed and we were unable to recover it. 00:25:20.946 [2024-07-15 23:28:36.073170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.946 [2024-07-15 23:28:36.073197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.946 qpair failed and we were unable to recover it. 00:25:20.946 [2024-07-15 23:28:36.073331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.946 [2024-07-15 23:28:36.073359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.946 qpair failed and we were unable to recover it. 00:25:20.946 [2024-07-15 23:28:36.073553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.946 [2024-07-15 23:28:36.073576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.946 qpair failed and we were unable to recover it. 00:25:20.946 [2024-07-15 23:28:36.073778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.946 [2024-07-15 23:28:36.073816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.946 qpair failed and we were unable to recover it. 00:25:20.946 [2024-07-15 23:28:36.073978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.946 [2024-07-15 23:28:36.074005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.946 qpair failed and we were unable to recover it. 00:25:20.946 [2024-07-15 23:28:36.074175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.946 [2024-07-15 23:28:36.074197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.946 qpair failed and we were unable to recover it. 
00:25:20.946 [2024-07-15 23:28:36.074397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.946 [2024-07-15 23:28:36.074424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.946 qpair failed and we were unable to recover it. 00:25:20.946 [2024-07-15 23:28:36.074583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.946 [2024-07-15 23:28:36.074611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.946 qpair failed and we were unable to recover it. 00:25:20.946 [2024-07-15 23:28:36.074746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.946 [2024-07-15 23:28:36.074784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.946 qpair failed and we were unable to recover it. 00:25:20.946 [2024-07-15 23:28:36.074916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.946 [2024-07-15 23:28:36.074957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.946 qpair failed and we were unable to recover it. 00:25:20.946 [2024-07-15 23:28:36.075144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.946 [2024-07-15 23:28:36.075171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.946 qpair failed and we were unable to recover it. 00:25:20.946 [2024-07-15 23:28:36.075357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.946 [2024-07-15 23:28:36.075379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.946 qpair failed and we were unable to recover it. 00:25:20.946 [2024-07-15 23:28:36.075537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.946 [2024-07-15 23:28:36.075565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.946 qpair failed and we were unable to recover it. 00:25:20.946 [2024-07-15 23:28:36.075692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.946 [2024-07-15 23:28:36.075720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.946 qpair failed and we were unable to recover it. 00:25:20.947 [2024-07-15 23:28:36.075881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.947 [2024-07-15 23:28:36.075906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.947 qpair failed and we were unable to recover it. 00:25:20.947 [2024-07-15 23:28:36.076034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.947 [2024-07-15 23:28:36.076059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.947 qpair failed and we were unable to recover it. 
00:25:20.947 [2024-07-15 23:28:36.076257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.947 [2024-07-15 23:28:36.076284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.947 qpair failed and we were unable to recover it. 00:25:20.947 [2024-07-15 23:28:36.076478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.947 [2024-07-15 23:28:36.076504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.947 qpair failed and we were unable to recover it. 00:25:20.947 [2024-07-15 23:28:36.076676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.947 [2024-07-15 23:28:36.076704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.947 qpair failed and we were unable to recover it. 00:25:20.947 [2024-07-15 23:28:36.076879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.947 [2024-07-15 23:28:36.076904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.947 qpair failed and we were unable to recover it. 00:25:20.947 [2024-07-15 23:28:36.077065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.947 [2024-07-15 23:28:36.077103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.947 qpair failed and we were unable to recover it. 00:25:20.947 [2024-07-15 23:28:36.077274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.947 [2024-07-15 23:28:36.077301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.947 qpair failed and we were unable to recover it. 00:25:20.947 [2024-07-15 23:28:36.077457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.947 [2024-07-15 23:28:36.077484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.947 qpair failed and we were unable to recover it. 00:25:20.947 [2024-07-15 23:28:36.077658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.947 [2024-07-15 23:28:36.077681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.947 qpair failed and we were unable to recover it. 00:25:20.947 [2024-07-15 23:28:36.077861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.947 [2024-07-15 23:28:36.077890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.947 qpair failed and we were unable to recover it. 00:25:20.947 [2024-07-15 23:28:36.078047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.947 [2024-07-15 23:28:36.078075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.947 qpair failed and we were unable to recover it. 
00:25:20.947 [2024-07-15 23:28:36.078257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.947 [2024-07-15 23:28:36.078280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.947 qpair failed and we were unable to recover it. 00:25:20.947 [2024-07-15 23:28:36.078433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.947 [2024-07-15 23:28:36.078460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.947 qpair failed and we were unable to recover it. 00:25:20.947 [2024-07-15 23:28:36.078639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.947 [2024-07-15 23:28:36.078667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.947 qpair failed and we were unable to recover it. 00:25:20.947 [2024-07-15 23:28:36.078801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.947 [2024-07-15 23:28:36.078826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.947 qpair failed and we were unable to recover it. 00:25:20.947 [2024-07-15 23:28:36.078996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.947 [2024-07-15 23:28:36.079038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.947 qpair failed and we were unable to recover it. 00:25:20.947 [2024-07-15 23:28:36.079204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.947 [2024-07-15 23:28:36.079232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.947 qpair failed and we were unable to recover it. 00:25:20.947 [2024-07-15 23:28:36.079390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.947 [2024-07-15 23:28:36.079413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.947 qpair failed and we were unable to recover it. 00:25:20.947 [2024-07-15 23:28:36.079584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.947 [2024-07-15 23:28:36.079612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.947 qpair failed and we were unable to recover it. 00:25:20.947 [2024-07-15 23:28:36.079757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.947 [2024-07-15 23:28:36.079789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.947 qpair failed and we were unable to recover it. 00:25:20.947 [2024-07-15 23:28:36.079908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.947 [2024-07-15 23:28:36.079932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.947 qpair failed and we were unable to recover it. 
00:25:20.947 [2024-07-15 23:28:36.080103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.947 [2024-07-15 23:28:36.080141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.947 qpair failed and we were unable to recover it. 00:25:20.947 [2024-07-15 23:28:36.080271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.947 [2024-07-15 23:28:36.080299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.947 qpair failed and we were unable to recover it. 00:25:20.947 [2024-07-15 23:28:36.080480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.947 [2024-07-15 23:28:36.080518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.947 qpair failed and we were unable to recover it. 00:25:20.947 [2024-07-15 23:28:36.080707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.947 [2024-07-15 23:28:36.080734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.947 qpair failed and we were unable to recover it. 00:25:20.947 [2024-07-15 23:28:36.080907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.947 [2024-07-15 23:28:36.080935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.947 qpair failed and we were unable to recover it. 00:25:20.947 [2024-07-15 23:28:36.081093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.947 [2024-07-15 23:28:36.081115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.947 qpair failed and we were unable to recover it. 00:25:20.947 [2024-07-15 23:28:36.081247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.947 [2024-07-15 23:28:36.081284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.947 qpair failed and we were unable to recover it. 00:25:20.947 [2024-07-15 23:28:36.081455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.947 [2024-07-15 23:28:36.081483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.947 qpair failed and we were unable to recover it. 00:25:20.947 [2024-07-15 23:28:36.081675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.947 [2024-07-15 23:28:36.081707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.947 qpair failed and we were unable to recover it. 00:25:20.947 [2024-07-15 23:28:36.081871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.947 [2024-07-15 23:28:36.081897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.947 qpair failed and we were unable to recover it. 
00:25:20.947 [2024-07-15 23:28:36.082080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.947 [2024-07-15 23:28:36.082108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.947 qpair failed and we were unable to recover it. 00:25:20.947 [2024-07-15 23:28:36.082243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.947 [2024-07-15 23:28:36.082280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.947 qpair failed and we were unable to recover it. 00:25:20.947 [2024-07-15 23:28:36.082455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.947 [2024-07-15 23:28:36.082483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.947 qpair failed and we were unable to recover it. 00:25:20.947 [2024-07-15 23:28:36.082635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.947 [2024-07-15 23:28:36.082663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.947 qpair failed and we were unable to recover it. 00:25:20.947 [2024-07-15 23:28:36.082857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.947 [2024-07-15 23:28:36.082882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.947 qpair failed and we were unable to recover it. 00:25:20.947 [2024-07-15 23:28:36.083038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.947 [2024-07-15 23:28:36.083062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.947 qpair failed and we were unable to recover it. 00:25:20.947 [2024-07-15 23:28:36.083258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.947 [2024-07-15 23:28:36.083286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.947 qpair failed and we were unable to recover it. 00:25:20.947 [2024-07-15 23:28:36.083413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.947 [2024-07-15 23:28:36.083451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.947 qpair failed and we were unable to recover it. 00:25:20.948 [2024-07-15 23:28:36.083626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.948 [2024-07-15 23:28:36.083653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.948 qpair failed and we were unable to recover it. 00:25:20.948 [2024-07-15 23:28:36.083807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.948 [2024-07-15 23:28:36.083832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.948 qpair failed and we were unable to recover it. 
00:25:20.948 [2024-07-15 23:28:36.083970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.948 [2024-07-15 23:28:36.083994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.948 qpair failed and we were unable to recover it. 00:25:20.948 [2024-07-15 23:28:36.084187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.948 [2024-07-15 23:28:36.084214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.948 qpair failed and we were unable to recover it. 00:25:20.948 [2024-07-15 23:28:36.084370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.948 [2024-07-15 23:28:36.084398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.948 qpair failed and we were unable to recover it. 00:25:20.948 [2024-07-15 23:28:36.084563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.948 [2024-07-15 23:28:36.084586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.948 qpair failed and we were unable to recover it. 00:25:20.948 [2024-07-15 23:28:36.084723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.948 [2024-07-15 23:28:36.084771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.948 qpair failed and we were unable to recover it. 00:25:20.948 [2024-07-15 23:28:36.084947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.948 [2024-07-15 23:28:36.084974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.948 qpair failed and we were unable to recover it. 00:25:20.948 [2024-07-15 23:28:36.085130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.948 [2024-07-15 23:28:36.085153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.948 qpair failed and we were unable to recover it. 00:25:20.948 [2024-07-15 23:28:36.085335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.948 [2024-07-15 23:28:36.085363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.948 qpair failed and we were unable to recover it. 00:25:20.948 [2024-07-15 23:28:36.085502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.948 [2024-07-15 23:28:36.085529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.948 qpair failed and we were unable to recover it. 00:25:20.948 [2024-07-15 23:28:36.085670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.948 [2024-07-15 23:28:36.085708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.948 qpair failed and we were unable to recover it. 
00:25:20.948 [2024-07-15 23:28:36.085843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.948 [2024-07-15 23:28:36.085885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.948 qpair failed and we were unable to recover it. 00:25:20.948 [2024-07-15 23:28:36.086027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.948 [2024-07-15 23:28:36.086054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.948 qpair failed and we were unable to recover it. 00:25:20.948 [2024-07-15 23:28:36.086237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.948 [2024-07-15 23:28:36.086259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.948 qpair failed and we were unable to recover it. 00:25:20.948 [2024-07-15 23:28:36.086414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.948 [2024-07-15 23:28:36.086442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.948 qpair failed and we were unable to recover it. 00:25:20.948 [2024-07-15 23:28:36.086565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.948 [2024-07-15 23:28:36.086592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.948 qpair failed and we were unable to recover it. 00:25:20.948 [2024-07-15 23:28:36.086755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.948 [2024-07-15 23:28:36.086780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.948 qpair failed and we were unable to recover it. 00:25:20.948 [2024-07-15 23:28:36.086902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.948 [2024-07-15 23:28:36.086943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.948 qpair failed and we were unable to recover it. 00:25:20.948 [2024-07-15 23:28:36.087120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.948 [2024-07-15 23:28:36.087148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.948 qpair failed and we were unable to recover it. 00:25:20.948 [2024-07-15 23:28:36.087303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.948 [2024-07-15 23:28:36.087325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.948 qpair failed and we were unable to recover it. 00:25:20.948 [2024-07-15 23:28:36.087500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.948 [2024-07-15 23:28:36.087528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.948 qpair failed and we were unable to recover it. 
00:25:20.948 [2024-07-15 23:28:36.087714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.948 [2024-07-15 23:28:36.087749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.948 qpair failed and we were unable to recover it. 00:25:20.948 [2024-07-15 23:28:36.087892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.948 [2024-07-15 23:28:36.087917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.948 qpair failed and we were unable to recover it. 00:25:20.948 [2024-07-15 23:28:36.088099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.948 [2024-07-15 23:28:36.088126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.948 qpair failed and we were unable to recover it. 00:25:20.948 [2024-07-15 23:28:36.088308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.948 [2024-07-15 23:28:36.088335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.948 qpair failed and we were unable to recover it. 00:25:20.948 [2024-07-15 23:28:36.088525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.948 [2024-07-15 23:28:36.088548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.948 qpair failed and we were unable to recover it. 00:25:20.948 [2024-07-15 23:28:36.088733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.948 [2024-07-15 23:28:36.088782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.948 qpair failed and we were unable to recover it. 00:25:20.948 [2024-07-15 23:28:36.088937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.948 [2024-07-15 23:28:36.088961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.948 qpair failed and we were unable to recover it. 00:25:20.948 [2024-07-15 23:28:36.089120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.948 [2024-07-15 23:28:36.089142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.948 qpair failed and we were unable to recover it. 00:25:20.948 [2024-07-15 23:28:36.089326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.948 [2024-07-15 23:28:36.089353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.948 qpair failed and we were unable to recover it. 00:25:20.948 [2024-07-15 23:28:36.089502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.948 [2024-07-15 23:28:36.089529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.948 qpair failed and we were unable to recover it. 
00:25:20.948 [2024-07-15 23:28:36.089712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.948 [2024-07-15 23:28:36.089734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.948 qpair failed and we were unable to recover it. 00:25:20.948 [2024-07-15 23:28:36.089908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.948 [2024-07-15 23:28:36.089936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.948 qpair failed and we were unable to recover it. 00:25:20.948 [2024-07-15 23:28:36.090105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.948 [2024-07-15 23:28:36.090133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.948 qpair failed and we were unable to recover it. 00:25:20.948 [2024-07-15 23:28:36.090297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.948 [2024-07-15 23:28:36.090319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.948 qpair failed and we were unable to recover it. 00:25:20.948 [2024-07-15 23:28:36.090457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.949 [2024-07-15 23:28:36.090499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.949 qpair failed and we were unable to recover it. 00:25:20.949 [2024-07-15 23:28:36.090653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.949 [2024-07-15 23:28:36.090681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.949 qpair failed and we were unable to recover it. 00:25:20.949 [2024-07-15 23:28:36.090791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.949 [2024-07-15 23:28:36.090816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.949 qpair failed and we were unable to recover it. 00:25:20.949 [2024-07-15 23:28:36.090977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.949 [2024-07-15 23:28:36.091001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.949 qpair failed and we were unable to recover it. 00:25:20.949 [2024-07-15 23:28:36.091162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.949 [2024-07-15 23:28:36.091190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.949 qpair failed and we were unable to recover it. 00:25:20.949 [2024-07-15 23:28:36.091371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.949 [2024-07-15 23:28:36.091393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.949 qpair failed and we were unable to recover it. 
00:25:20.949 [2024-07-15 23:28:36.091549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.949 [2024-07-15 23:28:36.091576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.949 qpair failed and we were unable to recover it. 00:25:20.949 [2024-07-15 23:28:36.091688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.949 [2024-07-15 23:28:36.091716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.949 qpair failed and we were unable to recover it. 00:25:20.949 [2024-07-15 23:28:36.091876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.949 [2024-07-15 23:28:36.091901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.949 qpair failed and we were unable to recover it. 00:25:20.949 [2024-07-15 23:28:36.092100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.949 [2024-07-15 23:28:36.092127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.949 qpair failed and we were unable to recover it. 00:25:20.949 [2024-07-15 23:28:36.092276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.949 [2024-07-15 23:28:36.092304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.949 qpair failed and we were unable to recover it. 00:25:20.949 [2024-07-15 23:28:36.092481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.949 [2024-07-15 23:28:36.092503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.949 qpair failed and we were unable to recover it. 00:25:20.949 [2024-07-15 23:28:36.092627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.949 [2024-07-15 23:28:36.092667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.949 qpair failed and we were unable to recover it. 00:25:20.949 [2024-07-15 23:28:36.092827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.949 [2024-07-15 23:28:36.092855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.949 qpair failed and we were unable to recover it. 00:25:20.949 [2024-07-15 23:28:36.093042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.949 [2024-07-15 23:28:36.093066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.949 qpair failed and we were unable to recover it. 00:25:20.949 [2024-07-15 23:28:36.093258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.949 [2024-07-15 23:28:36.093286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.949 qpair failed and we were unable to recover it. 
00:25:20.949 [2024-07-15 23:28:36.093436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.949 [2024-07-15 23:28:36.093463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.949 qpair failed and we were unable to recover it. 00:25:20.949 [2024-07-15 23:28:36.093622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.949 [2024-07-15 23:28:36.093649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.949 qpair failed and we were unable to recover it. 00:25:20.949 [2024-07-15 23:28:36.093807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.949 [2024-07-15 23:28:36.093833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.949 qpair failed and we were unable to recover it. 00:25:20.949 [2024-07-15 23:28:36.093975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.949 [2024-07-15 23:28:36.094000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.949 qpair failed and we were unable to recover it. 00:25:20.949 [2024-07-15 23:28:36.094144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.949 [2024-07-15 23:28:36.094182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.949 qpair failed and we were unable to recover it. 00:25:20.949 [2024-07-15 23:28:36.094363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.949 [2024-07-15 23:28:36.094402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.949 qpair failed and we were unable to recover it. 00:25:20.949 [2024-07-15 23:28:36.094571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.949 [2024-07-15 23:28:36.094603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.949 qpair failed and we were unable to recover it. 00:25:20.949 [2024-07-15 23:28:36.094730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.949 [2024-07-15 23:28:36.094783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.949 qpair failed and we were unable to recover it. 00:25:20.949 [2024-07-15 23:28:36.094922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.949 [2024-07-15 23:28:36.094948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.949 qpair failed and we were unable to recover it. 00:25:20.949 [2024-07-15 23:28:36.095060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.949 [2024-07-15 23:28:36.095087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.949 qpair failed and we were unable to recover it. 
00:25:20.949 [2024-07-15 23:28:36.095240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.949 [2024-07-15 23:28:36.095278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.949 qpair failed and we were unable to recover it. 00:25:20.949 [2024-07-15 23:28:36.095442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.949 [2024-07-15 23:28:36.095465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.949 qpair failed and we were unable to recover it. 00:25:20.949 [2024-07-15 23:28:36.095593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.949 [2024-07-15 23:28:36.095621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.949 qpair failed and we were unable to recover it. 00:25:20.949 [2024-07-15 23:28:36.095780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.949 [2024-07-15 23:28:36.095804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.949 qpair failed and we were unable to recover it. 00:25:20.949 [2024-07-15 23:28:36.095972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.949 [2024-07-15 23:28:36.096000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.949 qpair failed and we were unable to recover it. 00:25:20.949 [2024-07-15 23:28:36.096174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.949 [2024-07-15 23:28:36.096201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.949 qpair failed and we were unable to recover it. 00:25:20.949 [2024-07-15 23:28:36.096337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.949 [2024-07-15 23:28:36.096373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.949 qpair failed and we were unable to recover it. 00:25:20.949 [2024-07-15 23:28:36.096498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.949 [2024-07-15 23:28:36.096521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.949 qpair failed and we were unable to recover it. 00:25:20.949 [2024-07-15 23:28:36.096714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.949 [2024-07-15 23:28:36.096748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.949 qpair failed and we were unable to recover it. 00:25:20.949 [2024-07-15 23:28:36.096878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.949 [2024-07-15 23:28:36.096902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.949 qpair failed and we were unable to recover it. 
00:25:20.949 [2024-07-15 23:28:36.097068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.949 [2024-07-15 23:28:36.097110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.949 qpair failed and we were unable to recover it. 00:25:20.949 [2024-07-15 23:28:36.097257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.949 [2024-07-15 23:28:36.097284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.949 qpair failed and we were unable to recover it. 00:25:20.949 [2024-07-15 23:28:36.097434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.949 [2024-07-15 23:28:36.097471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.949 qpair failed and we were unable to recover it. 00:25:20.949 [2024-07-15 23:28:36.097627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.949 [2024-07-15 23:28:36.097649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.949 qpair failed and we were unable to recover it. 00:25:20.949 [2024-07-15 23:28:36.097801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.949 [2024-07-15 23:28:36.097829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.950 qpair failed and we were unable to recover it. 00:25:20.950 [2024-07-15 23:28:36.097981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.950 [2024-07-15 23:28:36.098004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.950 qpair failed and we were unable to recover it. 00:25:20.950 [2024-07-15 23:28:36.098142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.950 [2024-07-15 23:28:36.098181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.950 qpair failed and we were unable to recover it. 00:25:20.950 [2024-07-15 23:28:36.098326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.950 [2024-07-15 23:28:36.098354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.950 qpair failed and we were unable to recover it. 00:25:20.950 [2024-07-15 23:28:36.098538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.950 [2024-07-15 23:28:36.098560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.950 qpair failed and we were unable to recover it. 00:25:20.950 [2024-07-15 23:28:36.098747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.950 [2024-07-15 23:28:36.098789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.950 qpair failed and we were unable to recover it. 
00:25:20.950 [2024-07-15 23:28:36.098932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.950 [2024-07-15 23:28:36.098955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.950 qpair failed and we were unable to recover it. 00:25:20.950 [2024-07-15 23:28:36.099106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.950 [2024-07-15 23:28:36.099129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.950 qpair failed and we were unable to recover it. 00:25:20.950 [2024-07-15 23:28:36.099276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.950 [2024-07-15 23:28:36.099303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.950 qpair failed and we were unable to recover it. 00:25:20.950 [2024-07-15 23:28:36.099448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.950 [2024-07-15 23:28:36.099480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.950 qpair failed and we were unable to recover it. 00:25:20.950 [2024-07-15 23:28:36.099634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.950 [2024-07-15 23:28:36.099671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.950 qpair failed and we were unable to recover it. 00:25:20.950 [2024-07-15 23:28:36.099813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.950 [2024-07-15 23:28:36.099853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.950 qpair failed and we were unable to recover it. 00:25:20.950 [2024-07-15 23:28:36.100032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.950 [2024-07-15 23:28:36.100059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.950 qpair failed and we were unable to recover it. 00:25:20.950 [2024-07-15 23:28:36.100188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.950 [2024-07-15 23:28:36.100227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.950 qpair failed and we were unable to recover it. 00:25:20.950 [2024-07-15 23:28:36.100413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.950 [2024-07-15 23:28:36.100440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.950 qpair failed and we were unable to recover it. 00:25:20.950 [2024-07-15 23:28:36.100565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.950 [2024-07-15 23:28:36.100592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.950 qpair failed and we were unable to recover it. 
00:25:20.950 [2024-07-15 23:28:36.100770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.950 [2024-07-15 23:28:36.100794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.950 qpair failed and we were unable to recover it. 00:25:20.950 [2024-07-15 23:28:36.100946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.950 [2024-07-15 23:28:36.100973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.950 qpair failed and we were unable to recover it. 00:25:20.950 [2024-07-15 23:28:36.101154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.950 [2024-07-15 23:28:36.101181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.950 qpair failed and we were unable to recover it. 00:25:20.950 [2024-07-15 23:28:36.101341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.950 [2024-07-15 23:28:36.101363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.950 qpair failed and we were unable to recover it. 00:25:20.950 [2024-07-15 23:28:36.101547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.950 [2024-07-15 23:28:36.101574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.950 qpair failed and we were unable to recover it. 00:25:20.950 [2024-07-15 23:28:36.101717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.950 [2024-07-15 23:28:36.101760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.950 qpair failed and we were unable to recover it. 00:25:20.950 [2024-07-15 23:28:36.101905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.950 [2024-07-15 23:28:36.101929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.950 qpair failed and we were unable to recover it. 00:25:20.950 [2024-07-15 23:28:36.102099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.950 [2024-07-15 23:28:36.102138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.950 qpair failed and we were unable to recover it. 00:25:20.950 [2024-07-15 23:28:36.102283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.950 [2024-07-15 23:28:36.102310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.950 qpair failed and we were unable to recover it. 00:25:20.950 [2024-07-15 23:28:36.102455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.950 [2024-07-15 23:28:36.102492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.950 qpair failed and we were unable to recover it. 
00:25:20.955 [2024-07-15 23:28:36.144672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.955 [2024-07-15 23:28:36.144698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.955 qpair failed and we were unable to recover it. 00:25:20.955 [2024-07-15 23:28:36.144885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.955 [2024-07-15 23:28:36.144908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.955 qpair failed and we were unable to recover it. 00:25:20.955 [2024-07-15 23:28:36.145123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.955 [2024-07-15 23:28:36.145150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.955 qpair failed and we were unable to recover it. 00:25:20.955 [2024-07-15 23:28:36.145344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.955 [2024-07-15 23:28:36.145366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.955 qpair failed and we were unable to recover it. 00:25:20.955 [2024-07-15 23:28:36.145541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.955 [2024-07-15 23:28:36.145569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.955 qpair failed and we were unable to recover it. 00:25:20.955 [2024-07-15 23:28:36.145695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.955 [2024-07-15 23:28:36.145723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.955 qpair failed and we were unable to recover it. 00:25:20.955 [2024-07-15 23:28:36.145876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.955 [2024-07-15 23:28:36.145901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.955 qpair failed and we were unable to recover it. 00:25:20.955 [2024-07-15 23:28:36.146077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.955 [2024-07-15 23:28:36.146104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.955 qpair failed and we were unable to recover it. 00:25:20.955 [2024-07-15 23:28:36.146260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.955 [2024-07-15 23:28:36.146288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.955 qpair failed and we were unable to recover it. 00:25:20.955 [2024-07-15 23:28:36.146465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.955 [2024-07-15 23:28:36.146488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.955 qpair failed and we were unable to recover it. 
00:25:20.955 [2024-07-15 23:28:36.146660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.955 [2024-07-15 23:28:36.146687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.955 qpair failed and we were unable to recover it. 00:25:20.955 [2024-07-15 23:28:36.146864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.955 [2024-07-15 23:28:36.146888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.955 qpair failed and we were unable to recover it. 00:25:20.955 [2024-07-15 23:28:36.147021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.955 [2024-07-15 23:28:36.147059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.955 qpair failed and we were unable to recover it. 00:25:20.955 [2024-07-15 23:28:36.147238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.955 [2024-07-15 23:28:36.147266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.955 qpair failed and we were unable to recover it. 00:25:20.955 [2024-07-15 23:28:36.147454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.955 [2024-07-15 23:28:36.147481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.955 qpair failed and we were unable to recover it. 00:25:20.955 [2024-07-15 23:28:36.147611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.955 [2024-07-15 23:28:36.147649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.955 qpair failed and we were unable to recover it. 00:25:20.955 [2024-07-15 23:28:36.147854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.955 [2024-07-15 23:28:36.147883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.955 qpair failed and we were unable to recover it. 00:25:20.955 [2024-07-15 23:28:36.148074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.955 [2024-07-15 23:28:36.148101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.955 qpair failed and we were unable to recover it. 00:25:20.955 [2024-07-15 23:28:36.148327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.955 [2024-07-15 23:28:36.148350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.955 qpair failed and we were unable to recover it. 00:25:20.955 [2024-07-15 23:28:36.148533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.955 [2024-07-15 23:28:36.148561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.955 qpair failed and we were unable to recover it. 
00:25:20.956 [2024-07-15 23:28:36.148710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.956 [2024-07-15 23:28:36.148751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.956 qpair failed and we were unable to recover it. 00:25:20.956 [2024-07-15 23:28:36.148892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.956 [2024-07-15 23:28:36.148915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.956 qpair failed and we were unable to recover it. 00:25:20.956 [2024-07-15 23:28:36.149067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.956 [2024-07-15 23:28:36.149116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.956 qpair failed and we were unable to recover it. 00:25:20.956 [2024-07-15 23:28:36.149299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.956 [2024-07-15 23:28:36.149326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.956 qpair failed and we were unable to recover it. 00:25:20.956 [2024-07-15 23:28:36.149520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.956 [2024-07-15 23:28:36.149542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.956 qpair failed and we were unable to recover it. 00:25:20.956 [2024-07-15 23:28:36.149720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.956 [2024-07-15 23:28:36.149755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.956 qpair failed and we were unable to recover it. 00:25:20.956 [2024-07-15 23:28:36.149913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.956 [2024-07-15 23:28:36.149940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.956 qpair failed and we were unable to recover it. 00:25:20.956 [2024-07-15 23:28:36.150104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.956 [2024-07-15 23:28:36.150127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.956 qpair failed and we were unable to recover it. 00:25:20.956 [2024-07-15 23:28:36.150284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.956 [2024-07-15 23:28:36.150325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.956 qpair failed and we were unable to recover it. 00:25:20.956 [2024-07-15 23:28:36.150476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.956 [2024-07-15 23:28:36.150504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.956 qpair failed and we were unable to recover it. 
00:25:20.956 [2024-07-15 23:28:36.150632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.956 [2024-07-15 23:28:36.150670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.956 qpair failed and we were unable to recover it. 00:25:20.956 [2024-07-15 23:28:36.150834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.956 [2024-07-15 23:28:36.150875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.956 qpair failed and we were unable to recover it. 00:25:20.956 [2024-07-15 23:28:36.151029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.956 [2024-07-15 23:28:36.151057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.956 qpair failed and we were unable to recover it. 00:25:20.956 [2024-07-15 23:28:36.151276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.956 [2024-07-15 23:28:36.151298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.956 qpair failed and we were unable to recover it. 00:25:20.956 [2024-07-15 23:28:36.151515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.956 [2024-07-15 23:28:36.151543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.956 qpair failed and we were unable to recover it. 00:25:20.956 [2024-07-15 23:28:36.151719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.956 [2024-07-15 23:28:36.151754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.956 qpair failed and we were unable to recover it. 00:25:20.956 [2024-07-15 23:28:36.151941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.956 [2024-07-15 23:28:36.151966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.956 qpair failed and we were unable to recover it. 00:25:20.956 [2024-07-15 23:28:36.152147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.956 [2024-07-15 23:28:36.152174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.956 qpair failed and we were unable to recover it. 00:25:20.956 [2024-07-15 23:28:36.152318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.956 [2024-07-15 23:28:36.152346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.956 qpair failed and we were unable to recover it. 00:25:20.956 [2024-07-15 23:28:36.152524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.956 [2024-07-15 23:28:36.152556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.956 qpair failed and we were unable to recover it. 
00:25:20.956 [2024-07-15 23:28:36.152729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.956 [2024-07-15 23:28:36.152781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.956 qpair failed and we were unable to recover it. 00:25:20.956 [2024-07-15 23:28:36.152938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.956 [2024-07-15 23:28:36.152966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.956 qpair failed and we were unable to recover it. 00:25:20.956 [2024-07-15 23:28:36.153164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.956 [2024-07-15 23:28:36.153187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.956 qpair failed and we were unable to recover it. 00:25:20.956 [2024-07-15 23:28:36.153389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.956 [2024-07-15 23:28:36.153416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.956 qpair failed and we were unable to recover it. 00:25:20.956 [2024-07-15 23:28:36.153605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.956 [2024-07-15 23:28:36.153632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.956 qpair failed and we were unable to recover it. 00:25:20.956 [2024-07-15 23:28:36.153808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.956 [2024-07-15 23:28:36.153832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.956 qpair failed and we were unable to recover it. 00:25:20.956 [2024-07-15 23:28:36.154061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.956 [2024-07-15 23:28:36.154089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.956 qpair failed and we were unable to recover it. 00:25:20.956 [2024-07-15 23:28:36.154239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.956 [2024-07-15 23:28:36.154266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.956 qpair failed and we were unable to recover it. 00:25:20.956 [2024-07-15 23:28:36.154454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.956 [2024-07-15 23:28:36.154476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.956 qpair failed and we were unable to recover it. 00:25:20.956 [2024-07-15 23:28:36.154662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.956 [2024-07-15 23:28:36.154689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.956 qpair failed and we were unable to recover it. 
00:25:20.956 [2024-07-15 23:28:36.154809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.956 [2024-07-15 23:28:36.154833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.956 qpair failed and we were unable to recover it. 00:25:20.956 [2024-07-15 23:28:36.155016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.956 [2024-07-15 23:28:36.155053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.956 qpair failed and we were unable to recover it. 00:25:20.956 [2024-07-15 23:28:36.155250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.956 [2024-07-15 23:28:36.155277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.956 qpair failed and we were unable to recover it. 00:25:20.956 [2024-07-15 23:28:36.155401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.956 [2024-07-15 23:28:36.155429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.956 qpair failed and we were unable to recover it. 00:25:20.956 [2024-07-15 23:28:36.155606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.956 [2024-07-15 23:28:36.155633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.956 qpair failed and we were unable to recover it. 00:25:20.956 [2024-07-15 23:28:36.155778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.956 [2024-07-15 23:28:36.155817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.956 qpair failed and we were unable to recover it. 00:25:20.956 [2024-07-15 23:28:36.155952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.956 [2024-07-15 23:28:36.155991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.956 qpair failed and we were unable to recover it. 00:25:20.956 [2024-07-15 23:28:36.156129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.956 [2024-07-15 23:28:36.156152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.956 qpair failed and we were unable to recover it. 00:25:20.956 [2024-07-15 23:28:36.156314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.956 [2024-07-15 23:28:36.156338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.956 qpair failed and we were unable to recover it. 00:25:20.957 [2024-07-15 23:28:36.156524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.957 [2024-07-15 23:28:36.156552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.957 qpair failed and we were unable to recover it. 
00:25:20.957 [2024-07-15 23:28:36.156703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.957 [2024-07-15 23:28:36.156725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.957 qpair failed and we were unable to recover it. 00:25:20.957 [2024-07-15 23:28:36.156922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.957 [2024-07-15 23:28:36.156949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.957 qpair failed and we were unable to recover it. 00:25:20.957 [2024-07-15 23:28:36.157107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.957 [2024-07-15 23:28:36.157134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.957 qpair failed and we were unable to recover it. 00:25:20.957 [2024-07-15 23:28:36.157321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.957 [2024-07-15 23:28:36.157343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.957 qpair failed and we were unable to recover it. 00:25:20.957 [2024-07-15 23:28:36.157574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.957 [2024-07-15 23:28:36.157601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.957 qpair failed and we were unable to recover it. 00:25:20.957 [2024-07-15 23:28:36.157752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.957 [2024-07-15 23:28:36.157780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.957 qpair failed and we were unable to recover it. 00:25:20.957 [2024-07-15 23:28:36.157932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.957 [2024-07-15 23:28:36.157956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.957 qpair failed and we were unable to recover it. 00:25:20.957 [2024-07-15 23:28:36.158093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.957 [2024-07-15 23:28:36.158132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.957 qpair failed and we were unable to recover it. 00:25:20.957 [2024-07-15 23:28:36.158271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.957 [2024-07-15 23:28:36.158303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.957 qpair failed and we were unable to recover it. 00:25:20.957 [2024-07-15 23:28:36.158489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.957 [2024-07-15 23:28:36.158511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.957 qpair failed and we were unable to recover it. 
00:25:20.957 [2024-07-15 23:28:36.158663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.957 [2024-07-15 23:28:36.158691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.957 qpair failed and we were unable to recover it. 00:25:20.957 [2024-07-15 23:28:36.158885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.957 [2024-07-15 23:28:36.158909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.957 qpair failed and we were unable to recover it. 00:25:20.957 [2024-07-15 23:28:36.159056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.957 [2024-07-15 23:28:36.159078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.957 qpair failed and we were unable to recover it. 00:25:20.957 [2024-07-15 23:28:36.159266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.957 [2024-07-15 23:28:36.159294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.957 qpair failed and we were unable to recover it. 00:25:20.957 [2024-07-15 23:28:36.159516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.957 [2024-07-15 23:28:36.159543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.957 qpair failed and we were unable to recover it. 00:25:20.957 [2024-07-15 23:28:36.159732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.957 [2024-07-15 23:28:36.159778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.957 qpair failed and we were unable to recover it. 00:25:20.957 [2024-07-15 23:28:36.159903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.957 [2024-07-15 23:28:36.159931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.957 qpair failed and we were unable to recover it. 00:25:20.957 [2024-07-15 23:28:36.160085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.957 [2024-07-15 23:28:36.160112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.957 qpair failed and we were unable to recover it. 00:25:20.957 [2024-07-15 23:28:36.160240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.957 [2024-07-15 23:28:36.160278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.957 qpair failed and we were unable to recover it. 00:25:20.957 [2024-07-15 23:28:36.160465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.957 [2024-07-15 23:28:36.160487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.957 qpair failed and we were unable to recover it. 
00:25:20.957 [2024-07-15 23:28:36.160677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.957 [2024-07-15 23:28:36.160704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.957 qpair failed and we were unable to recover it. 00:25:20.957 [2024-07-15 23:28:36.160882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.957 [2024-07-15 23:28:36.160906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.957 qpair failed and we were unable to recover it. 00:25:20.957 [2024-07-15 23:28:36.161094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.957 [2024-07-15 23:28:36.161121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.957 qpair failed and we were unable to recover it. 00:25:20.957 [2024-07-15 23:28:36.161282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.957 [2024-07-15 23:28:36.161309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.957 qpair failed and we were unable to recover it. 00:25:20.957 [2024-07-15 23:28:36.161476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.957 [2024-07-15 23:28:36.161498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.957 qpair failed and we were unable to recover it. 00:25:20.957 [2024-07-15 23:28:36.161693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.957 [2024-07-15 23:28:36.161720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.957 qpair failed and we were unable to recover it. 00:25:20.957 [2024-07-15 23:28:36.161910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.957 [2024-07-15 23:28:36.161934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.957 qpair failed and we were unable to recover it. 00:25:20.957 [2024-07-15 23:28:36.162103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.957 [2024-07-15 23:28:36.162126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.957 qpair failed and we were unable to recover it. 00:25:20.957 [2024-07-15 23:28:36.162272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.957 [2024-07-15 23:28:36.162299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.957 qpair failed and we were unable to recover it. 00:25:20.957 [2024-07-15 23:28:36.162450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.957 [2024-07-15 23:28:36.162477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.957 qpair failed and we were unable to recover it. 
00:25:20.957 [2024-07-15 23:28:36.162655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.957 [2024-07-15 23:28:36.162683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.957 qpair failed and we were unable to recover it. 00:25:20.957 [2024-07-15 23:28:36.162840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.957 [2024-07-15 23:28:36.162865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.957 qpair failed and we were unable to recover it. 00:25:20.957 [2024-07-15 23:28:36.163002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.957 [2024-07-15 23:28:36.163043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.957 qpair failed and we were unable to recover it. 00:25:20.957 [2024-07-15 23:28:36.163207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.957 [2024-07-15 23:28:36.163230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.957 qpair failed and we were unable to recover it. 00:25:20.957 [2024-07-15 23:28:36.163393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.957 [2024-07-15 23:28:36.163420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.957 qpair failed and we were unable to recover it. 00:25:20.957 [2024-07-15 23:28:36.163596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.957 [2024-07-15 23:28:36.163627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.957 qpair failed and we were unable to recover it. 00:25:20.957 [2024-07-15 23:28:36.163784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.957 [2024-07-15 23:28:36.163809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.957 qpair failed and we were unable to recover it. 00:25:20.957 [2024-07-15 23:28:36.163977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.957 [2024-07-15 23:28:36.164016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.957 qpair failed and we were unable to recover it. 00:25:20.957 [2024-07-15 23:28:36.164193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.957 [2024-07-15 23:28:36.164221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.957 qpair failed and we were unable to recover it. 00:25:20.957 [2024-07-15 23:28:36.164389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.957 [2024-07-15 23:28:36.164411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.957 qpair failed and we were unable to recover it. 
00:25:20.957 [2024-07-15 23:28:36.164600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.957 [2024-07-15 23:28:36.164627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.957 qpair failed and we were unable to recover it. 00:25:20.957 [2024-07-15 23:28:36.164786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.957 [2024-07-15 23:28:36.164809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.957 qpair failed and we were unable to recover it. 00:25:20.957 [2024-07-15 23:28:36.164950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.957 [2024-07-15 23:28:36.164972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.957 qpair failed and we were unable to recover it. 00:25:20.958 [2024-07-15 23:28:36.165143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.958 [2024-07-15 23:28:36.165170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.958 qpair failed and we were unable to recover it. 00:25:20.958 [2024-07-15 23:28:36.165348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.958 [2024-07-15 23:28:36.165376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.958 qpair failed and we were unable to recover it. 00:25:20.958 [2024-07-15 23:28:36.165570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.958 [2024-07-15 23:28:36.165597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.958 qpair failed and we were unable to recover it. 00:25:20.958 [2024-07-15 23:28:36.165716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.958 [2024-07-15 23:28:36.165749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.958 qpair failed and we were unable to recover it. 00:25:20.958 [2024-07-15 23:28:36.165952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.958 [2024-07-15 23:28:36.165976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.958 qpair failed and we were unable to recover it. 00:25:20.958 [2024-07-15 23:28:36.166141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.958 [2024-07-15 23:28:36.166163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.958 qpair failed and we were unable to recover it. 00:25:20.958 [2024-07-15 23:28:36.166358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.958 [2024-07-15 23:28:36.166385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.958 qpair failed and we were unable to recover it. 
00:25:20.958 [2024-07-15 23:28:36.166526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.958 [2024-07-15 23:28:36.166553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.958 qpair failed and we were unable to recover it. 00:25:20.958 [2024-07-15 23:28:36.166777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.958 [2024-07-15 23:28:36.166815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.958 qpair failed and we were unable to recover it. 00:25:20.958 [2024-07-15 23:28:36.166986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.958 [2024-07-15 23:28:36.167013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.958 qpair failed and we were unable to recover it. 00:25:20.958 [2024-07-15 23:28:36.167159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.958 [2024-07-15 23:28:36.167187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.958 qpair failed and we were unable to recover it. 00:25:20.958 [2024-07-15 23:28:36.167380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.958 [2024-07-15 23:28:36.167402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.958 qpair failed and we were unable to recover it. 00:25:20.958 [2024-07-15 23:28:36.167581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.958 [2024-07-15 23:28:36.167609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.958 qpair failed and we were unable to recover it. 00:25:20.958 [2024-07-15 23:28:36.167782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.958 [2024-07-15 23:28:36.167810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.958 qpair failed and we were unable to recover it. 00:25:20.958 [2024-07-15 23:28:36.167983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.958 [2024-07-15 23:28:36.168006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.958 qpair failed and we were unable to recover it. 00:25:20.958 [2024-07-15 23:28:36.168154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.958 [2024-07-15 23:28:36.168177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.958 qpair failed and we were unable to recover it. 00:25:20.958 [2024-07-15 23:28:36.168340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.958 [2024-07-15 23:28:36.168367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.958 qpair failed and we were unable to recover it. 
00:25:20.958 [2024-07-15 23:28:36.168512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.958 [2024-07-15 23:28:36.168549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.958 qpair failed and we were unable to recover it. 00:25:20.958 [2024-07-15 23:28:36.168693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.958 [2024-07-15 23:28:36.168735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.958 qpair failed and we were unable to recover it. 00:25:20.958 [2024-07-15 23:28:36.168856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.958 [2024-07-15 23:28:36.168888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.958 qpair failed and we were unable to recover it. 00:25:20.958 [2024-07-15 23:28:36.169075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.958 [2024-07-15 23:28:36.169113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.958 qpair failed and we were unable to recover it. 00:25:20.958 [2024-07-15 23:28:36.169278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.958 [2024-07-15 23:28:36.169305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.958 qpair failed and we were unable to recover it. 00:25:20.958 [2024-07-15 23:28:36.169452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.958 [2024-07-15 23:28:36.169479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.958 qpair failed and we were unable to recover it. 00:25:20.958 [2024-07-15 23:28:36.169629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.958 [2024-07-15 23:28:36.169666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.958 qpair failed and we were unable to recover it. 00:25:20.958 [2024-07-15 23:28:36.169859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.958 [2024-07-15 23:28:36.169887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.958 qpair failed and we were unable to recover it. 00:25:20.958 [2024-07-15 23:28:36.170009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.958 [2024-07-15 23:28:36.170036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.958 qpair failed and we were unable to recover it. 00:25:20.958 [2024-07-15 23:28:36.170205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.958 [2024-07-15 23:28:36.170228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.958 qpair failed and we were unable to recover it. 
00:25:20.958 [2024-07-15 23:28:36.170419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.958 [2024-07-15 23:28:36.170447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.958 qpair failed and we were unable to recover it. 00:25:20.958 [2024-07-15 23:28:36.170561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.958 [2024-07-15 23:28:36.170587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.958 qpair failed and we were unable to recover it. 00:25:20.958 [2024-07-15 23:28:36.170753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.958 [2024-07-15 23:28:36.170780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.958 qpair failed and we were unable to recover it. 00:25:20.958 [2024-07-15 23:28:36.170939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.958 [2024-07-15 23:28:36.170963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.958 qpair failed and we were unable to recover it. 00:25:20.958 [2024-07-15 23:28:36.171133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.958 [2024-07-15 23:28:36.171161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.958 qpair failed and we were unable to recover it. 00:25:20.958 [2024-07-15 23:28:36.171330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.958 [2024-07-15 23:28:36.171352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.958 qpair failed and we were unable to recover it. 00:25:20.958 [2024-07-15 23:28:36.171515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.958 [2024-07-15 23:28:36.171543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.958 qpair failed and we were unable to recover it. 00:25:20.958 [2024-07-15 23:28:36.171692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.958 [2024-07-15 23:28:36.171719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.958 qpair failed and we were unable to recover it. 00:25:20.958 [2024-07-15 23:28:36.171868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.958 [2024-07-15 23:28:36.171891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.958 qpair failed and we were unable to recover it. 00:25:20.958 [2024-07-15 23:28:36.172037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.958 [2024-07-15 23:28:36.172059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.958 qpair failed and we were unable to recover it. 
00:25:20.958 [2024-07-15 23:28:36.172249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.958 [2024-07-15 23:28:36.172276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.958 qpair failed and we were unable to recover it. 00:25:20.958 [2024-07-15 23:28:36.172434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.958 [2024-07-15 23:28:36.172456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.958 qpair failed and we were unable to recover it. 00:25:20.958 [2024-07-15 23:28:36.172637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.958 [2024-07-15 23:28:36.172663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.958 qpair failed and we were unable to recover it. 00:25:20.958 [2024-07-15 23:28:36.172787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.958 [2024-07-15 23:28:36.172814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.958 qpair failed and we were unable to recover it. 00:25:20.958 [2024-07-15 23:28:36.172957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.958 [2024-07-15 23:28:36.172980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.958 qpair failed and we were unable to recover it. 00:25:20.958 [2024-07-15 23:28:36.173120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.958 [2024-07-15 23:28:36.173158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.958 qpair failed and we were unable to recover it. 00:25:20.958 [2024-07-15 23:28:36.173304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.958 [2024-07-15 23:28:36.173332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.958 qpair failed and we were unable to recover it. 00:25:20.958 [2024-07-15 23:28:36.173472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.958 [2024-07-15 23:28:36.173508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.958 qpair failed and we were unable to recover it. 00:25:20.958 [2024-07-15 23:28:36.173689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.958 [2024-07-15 23:28:36.173717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.958 qpair failed and we were unable to recover it. 00:25:20.958 [2024-07-15 23:28:36.173914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.958 [2024-07-15 23:28:36.173941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.958 qpair failed and we were unable to recover it. 
00:25:20.958 [2024-07-15 23:28:36.174112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.958 [2024-07-15 23:28:36.174135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.959 qpair failed and we were unable to recover it. 00:25:20.959 [2024-07-15 23:28:36.174314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.959 [2024-07-15 23:28:36.174341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.959 qpair failed and we were unable to recover it. 00:25:20.959 [2024-07-15 23:28:36.174483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.959 [2024-07-15 23:28:36.174509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.959 qpair failed and we were unable to recover it. 00:25:20.959 [2024-07-15 23:28:36.174696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.959 [2024-07-15 23:28:36.174719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.959 qpair failed and we were unable to recover it. 00:25:20.959 [2024-07-15 23:28:36.174930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.959 [2024-07-15 23:28:36.174957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.959 qpair failed and we were unable to recover it. 00:25:20.959 [2024-07-15 23:28:36.175080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.959 [2024-07-15 23:28:36.175107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.959 qpair failed and we were unable to recover it. 00:25:20.959 [2024-07-15 23:28:36.175282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.959 [2024-07-15 23:28:36.175304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.959 qpair failed and we were unable to recover it. 00:25:20.959 [2024-07-15 23:28:36.175494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.959 [2024-07-15 23:28:36.175521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.959 qpair failed and we were unable to recover it. 00:25:20.959 [2024-07-15 23:28:36.175672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.959 [2024-07-15 23:28:36.175700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.959 qpair failed and we were unable to recover it. 00:25:20.959 [2024-07-15 23:28:36.175870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.959 [2024-07-15 23:28:36.175895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.959 qpair failed and we were unable to recover it. 
00:25:20.959 [2024-07-15 23:28:36.176062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.959 [2024-07-15 23:28:36.176089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.959 qpair failed and we were unable to recover it. 00:25:20.959 [2024-07-15 23:28:36.176273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.959 [2024-07-15 23:28:36.176299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.959 qpair failed and we were unable to recover it. 00:25:20.959 [2024-07-15 23:28:36.176497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.959 [2024-07-15 23:28:36.176520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.959 qpair failed and we were unable to recover it. 00:25:20.959 [2024-07-15 23:28:36.176692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.959 [2024-07-15 23:28:36.176719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.959 qpair failed and we were unable to recover it. 00:25:20.959 [2024-07-15 23:28:36.176887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.959 [2024-07-15 23:28:36.176910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.959 qpair failed and we were unable to recover it. 00:25:20.959 [2024-07-15 23:28:36.177085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.959 [2024-07-15 23:28:36.177106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.959 qpair failed and we were unable to recover it. 00:25:20.959 [2024-07-15 23:28:36.177301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.959 [2024-07-15 23:28:36.177328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.959 qpair failed and we were unable to recover it. 00:25:20.959 [2024-07-15 23:28:36.177472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.959 [2024-07-15 23:28:36.177499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.959 qpair failed and we were unable to recover it. 00:25:20.959 [2024-07-15 23:28:36.177694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.959 [2024-07-15 23:28:36.177715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.959 qpair failed and we were unable to recover it. 00:25:20.959 [2024-07-15 23:28:36.177894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.959 [2024-07-15 23:28:36.177921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.959 qpair failed and we were unable to recover it. 
00:25:20.959 [2024-07-15 23:28:36.178063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.959 [2024-07-15 23:28:36.178090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.959 qpair failed and we were unable to recover it. 00:25:20.959 [2024-07-15 23:28:36.178213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.959 [2024-07-15 23:28:36.178251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.959 qpair failed and we were unable to recover it. 00:25:20.959 [2024-07-15 23:28:36.178366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.959 [2024-07-15 23:28:36.178388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.959 qpair failed and we were unable to recover it. 00:25:20.959 [2024-07-15 23:28:36.178526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.959 [2024-07-15 23:28:36.178554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.959 qpair failed and we were unable to recover it. 00:25:20.959 [2024-07-15 23:28:36.178700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.959 [2024-07-15 23:28:36.178723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.959 qpair failed and we were unable to recover it. 00:25:20.959 [2024-07-15 23:28:36.178882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.959 [2024-07-15 23:28:36.178922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.959 qpair failed and we were unable to recover it. 00:25:20.959 [2024-07-15 23:28:36.179078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.959 [2024-07-15 23:28:36.179105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.959 qpair failed and we were unable to recover it. 00:25:20.959 [2024-07-15 23:28:36.179305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.959 [2024-07-15 23:28:36.179327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.959 qpair failed and we were unable to recover it. 00:25:20.959 [2024-07-15 23:28:36.179511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.959 [2024-07-15 23:28:36.179538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.959 qpair failed and we were unable to recover it. 00:25:20.959 [2024-07-15 23:28:36.179647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.959 [2024-07-15 23:28:36.179673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.959 qpair failed and we were unable to recover it. 
00:25:20.959 [2024-07-15 23:28:36.179857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.959 [2024-07-15 23:28:36.179881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.959 qpair failed and we were unable to recover it. 00:25:20.959 [2024-07-15 23:28:36.180060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.959 [2024-07-15 23:28:36.180088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.959 qpair failed and we were unable to recover it. 00:25:20.959 [2024-07-15 23:28:36.180266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.959 [2024-07-15 23:28:36.180292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.959 qpair failed and we were unable to recover it. 00:25:20.959 [2024-07-15 23:28:36.180431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.959 [2024-07-15 23:28:36.180467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.959 qpair failed and we were unable to recover it. 00:25:20.959 [2024-07-15 23:28:36.180612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.959 [2024-07-15 23:28:36.180653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.959 qpair failed and we were unable to recover it. 00:25:20.959 [2024-07-15 23:28:36.180768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.959 [2024-07-15 23:28:36.180795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.959 qpair failed and we were unable to recover it. 00:25:20.959 [2024-07-15 23:28:36.180978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.959 [2024-07-15 23:28:36.181015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.959 qpair failed and we were unable to recover it. 00:25:20.959 [2024-07-15 23:28:36.181188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.959 [2024-07-15 23:28:36.181215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.959 qpair failed and we were unable to recover it. 00:25:20.959 [2024-07-15 23:28:36.181333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.959 [2024-07-15 23:28:36.181360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.959 qpair failed and we were unable to recover it. 00:25:20.959 [2024-07-15 23:28:36.181504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.959 [2024-07-15 23:28:36.181527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.959 qpair failed and we were unable to recover it. 
00:25:20.959 [2024-07-15 23:28:36.181661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.959 [2024-07-15 23:28:36.181687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.959 qpair failed and we were unable to recover it. 00:25:20.959 [2024-07-15 23:28:36.181863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.959 [2024-07-15 23:28:36.181888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.959 qpair failed and we were unable to recover it. 00:25:20.959 [2024-07-15 23:28:36.182058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.959 [2024-07-15 23:28:36.182080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.959 qpair failed and we were unable to recover it. 00:25:20.959 [2024-07-15 23:28:36.182228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.959 [2024-07-15 23:28:36.182250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.959 qpair failed and we were unable to recover it. 00:25:20.959 [2024-07-15 23:28:36.182381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.959 [2024-07-15 23:28:36.182408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.959 qpair failed and we were unable to recover it. 00:25:20.959 [2024-07-15 23:28:36.182543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.959 [2024-07-15 23:28:36.182565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.959 qpair failed and we were unable to recover it. 00:25:20.959 [2024-07-15 23:28:36.182721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.959 [2024-07-15 23:28:36.182766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.959 qpair failed and we were unable to recover it. 00:25:20.959 [2024-07-15 23:28:36.182915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.959 [2024-07-15 23:28:36.182942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.959 qpair failed and we were unable to recover it. 00:25:20.960 [2024-07-15 23:28:36.183124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.960 [2024-07-15 23:28:36.183145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.960 qpair failed and we were unable to recover it. 00:25:20.960 [2024-07-15 23:28:36.183289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.960 [2024-07-15 23:28:36.183314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.960 qpair failed and we were unable to recover it. 
00:25:20.960 [2024-07-15 23:28:36.183463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.960 [2024-07-15 23:28:36.183490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.960 qpair failed and we were unable to recover it. 00:25:20.960 [2024-07-15 23:28:36.183668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.960 [2024-07-15 23:28:36.183691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.960 qpair failed and we were unable to recover it. 00:25:20.960 [2024-07-15 23:28:36.183832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.960 [2024-07-15 23:28:36.183872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.960 qpair failed and we were unable to recover it. 00:25:20.960 [2024-07-15 23:28:36.184022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.960 [2024-07-15 23:28:36.184049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.960 qpair failed and we were unable to recover it. 00:25:20.960 [2024-07-15 23:28:36.184233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.960 [2024-07-15 23:28:36.184256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.960 qpair failed and we were unable to recover it. 00:25:20.960 [2024-07-15 23:28:36.184486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.960 [2024-07-15 23:28:36.184513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.960 qpair failed and we were unable to recover it. 00:25:20.960 [2024-07-15 23:28:36.184756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.960 [2024-07-15 23:28:36.184784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.960 qpair failed and we were unable to recover it. 00:25:20.960 [2024-07-15 23:28:36.184916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.960 [2024-07-15 23:28:36.184938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.960 qpair failed and we were unable to recover it. 00:25:20.960 [2024-07-15 23:28:36.185227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.960 [2024-07-15 23:28:36.185254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.960 qpair failed and we were unable to recover it. 00:25:20.960 [2024-07-15 23:28:36.185478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.960 [2024-07-15 23:28:36.185504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.960 qpair failed and we were unable to recover it. 
00:25:20.960 [2024-07-15 23:28:36.185696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.960 [2024-07-15 23:28:36.185717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.960 qpair failed and we were unable to recover it. 00:25:20.960 [2024-07-15 23:28:36.185866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.960 [2024-07-15 23:28:36.185908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.960 qpair failed and we were unable to recover it. 00:25:20.960 [2024-07-15 23:28:36.186155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.960 [2024-07-15 23:28:36.186182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.960 qpair failed and we were unable to recover it. 00:25:20.960 [2024-07-15 23:28:36.186359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.960 [2024-07-15 23:28:36.186381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.960 qpair failed and we were unable to recover it. 00:25:20.960 [2024-07-15 23:28:36.186679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.960 [2024-07-15 23:28:36.186728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.960 qpair failed and we were unable to recover it. 00:25:20.960 [2024-07-15 23:28:36.186899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.960 [2024-07-15 23:28:36.186926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.960 qpair failed and we were unable to recover it. 00:25:20.960 [2024-07-15 23:28:36.187052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.960 [2024-07-15 23:28:36.187090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.960 qpair failed and we were unable to recover it. 00:25:20.960 [2024-07-15 23:28:36.187275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.960 [2024-07-15 23:28:36.187305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.960 qpair failed and we were unable to recover it. 00:25:20.960 [2024-07-15 23:28:36.187539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.960 [2024-07-15 23:28:36.187566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.960 qpair failed and we were unable to recover it. 00:25:20.960 [2024-07-15 23:28:36.187758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.960 [2024-07-15 23:28:36.187800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.960 qpair failed and we were unable to recover it. 
00:25:20.960 [2024-07-15 23:28:36.187926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.960 [2024-07-15 23:28:36.187949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.960 qpair failed and we were unable to recover it. 00:25:20.960 [2024-07-15 23:28:36.188143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.960 [2024-07-15 23:28:36.188170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.960 qpair failed and we were unable to recover it. 00:25:20.960 [2024-07-15 23:28:36.188356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.960 [2024-07-15 23:28:36.188384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.960 qpair failed and we were unable to recover it. 00:25:20.960 [2024-07-15 23:28:36.188614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.960 [2024-07-15 23:28:36.188641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.960 qpair failed and we were unable to recover it. 00:25:20.960 [2024-07-15 23:28:36.188842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.960 [2024-07-15 23:28:36.188870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.960 qpair failed and we were unable to recover it. 00:25:20.960 [2024-07-15 23:28:36.189043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.960 [2024-07-15 23:28:36.189064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.960 qpair failed and we were unable to recover it. 00:25:20.960 [2024-07-15 23:28:36.189303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.960 [2024-07-15 23:28:36.189330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.960 qpair failed and we were unable to recover it. 00:25:20.960 [2024-07-15 23:28:36.189481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.960 [2024-07-15 23:28:36.189507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.960 qpair failed and we were unable to recover it. 00:25:20.960 [2024-07-15 23:28:36.189802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.960 [2024-07-15 23:28:36.189826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.960 qpair failed and we were unable to recover it. 00:25:20.960 [2024-07-15 23:28:36.190003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.960 [2024-07-15 23:28:36.190044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.960 qpair failed and we were unable to recover it. 
00:25:20.960 [2024-07-15 23:28:36.190217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.960 [2024-07-15 23:28:36.190243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.960 qpair failed and we were unable to recover it. 00:25:20.960 [2024-07-15 23:28:36.190506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.960 [2024-07-15 23:28:36.190528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.960 qpair failed and we were unable to recover it. 00:25:20.960 [2024-07-15 23:28:36.190796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.960 [2024-07-15 23:28:36.190818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.960 qpair failed and we were unable to recover it. 00:25:20.960 [2024-07-15 23:28:36.191038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.960 [2024-07-15 23:28:36.191064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.960 qpair failed and we were unable to recover it. 00:25:20.960 [2024-07-15 23:28:36.191296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.960 [2024-07-15 23:28:36.191318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.960 qpair failed and we were unable to recover it. 00:25:20.960 [2024-07-15 23:28:36.191516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.960 [2024-07-15 23:28:36.191543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.960 qpair failed and we were unable to recover it. 00:25:20.960 [2024-07-15 23:28:36.191668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.960 [2024-07-15 23:28:36.191695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.960 qpair failed and we were unable to recover it. 00:25:20.960 [2024-07-15 23:28:36.191830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.960 [2024-07-15 23:28:36.191853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.960 qpair failed and we were unable to recover it. 00:25:20.960 [2024-07-15 23:28:36.191978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.960 [2024-07-15 23:28:36.192002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.960 qpair failed and we were unable to recover it. 00:25:20.960 [2024-07-15 23:28:36.192160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.960 [2024-07-15 23:28:36.192187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.960 qpair failed and we were unable to recover it. 
00:25:20.960 [2024-07-15 23:28:36.192347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.960 [2024-07-15 23:28:36.192369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.961 qpair failed and we were unable to recover it. 00:25:20.961 [2024-07-15 23:28:36.192577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.961 [2024-07-15 23:28:36.192604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.961 qpair failed and we were unable to recover it. 00:25:20.961 [2024-07-15 23:28:36.192832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.961 [2024-07-15 23:28:36.192861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.961 qpair failed and we were unable to recover it. 00:25:20.961 [2024-07-15 23:28:36.193044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.961 [2024-07-15 23:28:36.193069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.961 qpair failed and we were unable to recover it. 00:25:20.961 [2024-07-15 23:28:36.193280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.961 [2024-07-15 23:28:36.193307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.961 qpair failed and we were unable to recover it. 00:25:20.961 [2024-07-15 23:28:36.193442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.961 [2024-07-15 23:28:36.193469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.961 qpair failed and we were unable to recover it. 00:25:20.961 [2024-07-15 23:28:36.193762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.961 [2024-07-15 23:28:36.193790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.961 qpair failed and we were unable to recover it. 00:25:20.961 [2024-07-15 23:28:36.193974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.961 [2024-07-15 23:28:36.194001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.961 qpair failed and we were unable to recover it. 00:25:20.961 [2024-07-15 23:28:36.194148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.961 [2024-07-15 23:28:36.194175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.961 qpair failed and we were unable to recover it. 00:25:20.961 [2024-07-15 23:28:36.194399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.961 [2024-07-15 23:28:36.194421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.961 qpair failed and we were unable to recover it. 
00:25:20.961 [2024-07-15 23:28:36.194638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.961 [2024-07-15 23:28:36.194666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.961 qpair failed and we were unable to recover it. 00:25:20.961 [2024-07-15 23:28:36.194839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.961 [2024-07-15 23:28:36.194866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.961 qpair failed and we were unable to recover it. 00:25:20.961 [2024-07-15 23:28:36.195102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.961 [2024-07-15 23:28:36.195123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.961 qpair failed and we were unable to recover it. 00:25:20.961 [2024-07-15 23:28:36.195342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.961 [2024-07-15 23:28:36.195368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.961 qpair failed and we were unable to recover it. 00:25:20.961 [2024-07-15 23:28:36.195597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.961 [2024-07-15 23:28:36.195624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.961 qpair failed and we were unable to recover it. 00:25:20.961 [2024-07-15 23:28:36.195828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.961 [2024-07-15 23:28:36.195851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.961 qpair failed and we were unable to recover it. 00:25:20.961 [2024-07-15 23:28:36.196057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.961 [2024-07-15 23:28:36.196084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.961 qpair failed and we were unable to recover it. 00:25:20.961 [2024-07-15 23:28:36.196401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.961 [2024-07-15 23:28:36.196428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.961 qpair failed and we were unable to recover it. 00:25:20.961 [2024-07-15 23:28:36.196700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.961 [2024-07-15 23:28:36.196727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.961 qpair failed and we were unable to recover it. 00:25:20.961 [2024-07-15 23:28:36.196925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.961 [2024-07-15 23:28:36.196949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.961 qpair failed and we were unable to recover it. 
00:25:20.961 [2024-07-15 23:28:36.197107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.961 [2024-07-15 23:28:36.197135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.961 qpair failed and we were unable to recover it. 00:25:20.961 [2024-07-15 23:28:36.197326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.961 [2024-07-15 23:28:36.197349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.961 qpair failed and we were unable to recover it. 00:25:20.961 [2024-07-15 23:28:36.197569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.961 [2024-07-15 23:28:36.197596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.961 qpair failed and we were unable to recover it. 00:25:20.961 [2024-07-15 23:28:36.197863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.961 [2024-07-15 23:28:36.197891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.961 qpair failed and we were unable to recover it. 00:25:20.961 [2024-07-15 23:28:36.198117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.961 [2024-07-15 23:28:36.198139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.961 qpair failed and we were unable to recover it. 00:25:20.961 [2024-07-15 23:28:36.198368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.961 [2024-07-15 23:28:36.198395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.961 qpair failed and we were unable to recover it. 00:25:20.961 [2024-07-15 23:28:36.198592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.961 [2024-07-15 23:28:36.198619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.961 qpair failed and we were unable to recover it. 00:25:20.961 [2024-07-15 23:28:36.198826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.961 [2024-07-15 23:28:36.198849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.961 qpair failed and we were unable to recover it. 00:25:20.961 [2024-07-15 23:28:36.199094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.961 [2024-07-15 23:28:36.199121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.961 qpair failed and we were unable to recover it. 00:25:20.961 [2024-07-15 23:28:36.199340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.961 [2024-07-15 23:28:36.199367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.961 qpair failed and we were unable to recover it. 
00:25:20.961 [2024-07-15 23:28:36.199527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.961 [2024-07-15 23:28:36.199549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.961 qpair failed and we were unable to recover it. 00:25:20.961 [2024-07-15 23:28:36.199781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.961 [2024-07-15 23:28:36.199808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.961 qpair failed and we were unable to recover it. 00:25:20.961 [2024-07-15 23:28:36.199964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.961 [2024-07-15 23:28:36.199991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.961 qpair failed and we were unable to recover it. 00:25:20.961 [2024-07-15 23:28:36.200158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.961 [2024-07-15 23:28:36.200181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.961 qpair failed and we were unable to recover it. 00:25:20.961 [2024-07-15 23:28:36.200343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.961 [2024-07-15 23:28:36.200370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.961 qpair failed and we were unable to recover it. 00:25:20.961 [2024-07-15 23:28:36.200548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.961 [2024-07-15 23:28:36.200574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.961 qpair failed and we were unable to recover it. 00:25:20.961 [2024-07-15 23:28:36.200766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.961 [2024-07-15 23:28:36.200803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.961 qpair failed and we were unable to recover it. 00:25:20.961 [2024-07-15 23:28:36.200940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.961 [2024-07-15 23:28:36.200963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.961 qpair failed and we were unable to recover it. 00:25:20.961 [2024-07-15 23:28:36.201128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.961 [2024-07-15 23:28:36.201156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.961 qpair failed and we were unable to recover it. 00:25:20.961 [2024-07-15 23:28:36.201323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.961 [2024-07-15 23:28:36.201345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.961 qpair failed and we were unable to recover it. 
00:25:20.961 [2024-07-15 23:28:36.201517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.961 [2024-07-15 23:28:36.201543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.961 qpair failed and we were unable to recover it. 00:25:20.961 [2024-07-15 23:28:36.201803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.961 [2024-07-15 23:28:36.201831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.961 qpair failed and we were unable to recover it. 00:25:20.961 [2024-07-15 23:28:36.202006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.961 [2024-07-15 23:28:36.202043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.961 qpair failed and we were unable to recover it. 00:25:20.961 [2024-07-15 23:28:36.202195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.961 [2024-07-15 23:28:36.202226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.961 qpair failed and we were unable to recover it. 00:25:20.961 [2024-07-15 23:28:36.202497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.961 [2024-07-15 23:28:36.202524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.961 qpair failed and we were unable to recover it. 00:25:20.961 [2024-07-15 23:28:36.202750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.961 [2024-07-15 23:28:36.202798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.961 qpair failed and we were unable to recover it. 00:25:20.961 [2024-07-15 23:28:36.202991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.961 [2024-07-15 23:28:36.203031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.961 qpair failed and we were unable to recover it. 00:25:20.961 [2024-07-15 23:28:36.203313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.961 [2024-07-15 23:28:36.203340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.961 qpair failed and we were unable to recover it. 00:25:20.961 [2024-07-15 23:28:36.203557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.961 [2024-07-15 23:28:36.203579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.961 qpair failed and we were unable to recover it. 00:25:20.962 [2024-07-15 23:28:36.203759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.962 [2024-07-15 23:28:36.203787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.962 qpair failed and we were unable to recover it. 
00:25:20.962 [2024-07-15 23:28:36.204043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.962 [2024-07-15 23:28:36.204071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.962 qpair failed and we were unable to recover it. 00:25:20.962 [2024-07-15 23:28:36.204274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.962 [2024-07-15 23:28:36.204296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.962 qpair failed and we were unable to recover it. 00:25:20.962 [2024-07-15 23:28:36.204487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.962 [2024-07-15 23:28:36.204515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.962 qpair failed and we were unable to recover it. 00:25:20.962 [2024-07-15 23:28:36.204695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.962 [2024-07-15 23:28:36.204726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.962 qpair failed and we were unable to recover it. 00:25:20.962 [2024-07-15 23:28:36.204883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.962 [2024-07-15 23:28:36.204906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.962 qpair failed and we were unable to recover it. 00:25:20.962 [2024-07-15 23:28:36.205079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.962 [2024-07-15 23:28:36.205119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.962 qpair failed and we were unable to recover it. 00:25:20.962 [2024-07-15 23:28:36.205346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.962 [2024-07-15 23:28:36.205372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.962 qpair failed and we were unable to recover it. 00:25:20.962 [2024-07-15 23:28:36.205660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.962 [2024-07-15 23:28:36.205684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.962 qpair failed and we were unable to recover it. 00:25:20.962 [2024-07-15 23:28:36.205933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.962 [2024-07-15 23:28:36.205961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.962 qpair failed and we were unable to recover it. 00:25:20.962 [2024-07-15 23:28:36.206199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.962 [2024-07-15 23:28:36.206226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.962 qpair failed and we were unable to recover it. 
00:25:20.962 [2024-07-15 23:28:36.206495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.962 [2024-07-15 23:28:36.206520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.962 qpair failed and we were unable to recover it. 00:25:20.962 [2024-07-15 23:28:36.206727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.962 [2024-07-15 23:28:36.206760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.962 qpair failed and we were unable to recover it. 00:25:20.962 [2024-07-15 23:28:36.206920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.962 [2024-07-15 23:28:36.206948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.962 qpair failed and we were unable to recover it. 00:25:20.962 [2024-07-15 23:28:36.207188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.962 [2024-07-15 23:28:36.207214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.962 qpair failed and we were unable to recover it. 00:25:20.962 [2024-07-15 23:28:36.207442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.962 [2024-07-15 23:28:36.207469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.962 qpair failed and we were unable to recover it. 00:25:20.962 [2024-07-15 23:28:36.207669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.962 [2024-07-15 23:28:36.207697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.962 qpair failed and we were unable to recover it. 00:25:20.962 [2024-07-15 23:28:36.207854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.962 [2024-07-15 23:28:36.207880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.962 qpair failed and we were unable to recover it. 00:25:20.962 [2024-07-15 23:28:36.208022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.962 [2024-07-15 23:28:36.208064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.962 qpair failed and we were unable to recover it. 00:25:20.962 [2024-07-15 23:28:36.208283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.962 [2024-07-15 23:28:36.208311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.962 qpair failed and we were unable to recover it. 00:25:20.962 [2024-07-15 23:28:36.208533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.962 [2024-07-15 23:28:36.208557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.962 qpair failed and we were unable to recover it. 
00:25:20.962 [2024-07-15 23:28:36.208781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.962 [2024-07-15 23:28:36.208818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.962 qpair failed and we were unable to recover it. 00:25:20.962 [2024-07-15 23:28:36.208987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.962 [2024-07-15 23:28:36.209014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.962 qpair failed and we were unable to recover it. 00:25:20.962 [2024-07-15 23:28:36.209244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.962 [2024-07-15 23:28:36.209282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.962 qpair failed and we were unable to recover it. 00:25:20.962 [2024-07-15 23:28:36.209477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.962 [2024-07-15 23:28:36.209504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.962 qpair failed and we were unable to recover it. 00:25:20.962 [2024-07-15 23:28:36.209719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.962 [2024-07-15 23:28:36.209763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.962 qpair failed and we were unable to recover it. 00:25:20.962 [2024-07-15 23:28:36.209945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.962 [2024-07-15 23:28:36.209970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.962 qpair failed and we were unable to recover it. 00:25:20.962 [2024-07-15 23:28:36.210157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.962 [2024-07-15 23:28:36.210195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.962 qpair failed and we were unable to recover it. 00:25:20.962 [2024-07-15 23:28:36.210392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.962 [2024-07-15 23:28:36.210419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.962 qpair failed and we were unable to recover it. 00:25:20.962 [2024-07-15 23:28:36.210700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.962 [2024-07-15 23:28:36.210751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.962 qpair failed and we were unable to recover it. 00:25:20.962 [2024-07-15 23:28:36.210975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.962 [2024-07-15 23:28:36.211008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.962 qpair failed and we were unable to recover it. 
00:25:20.962 [2024-07-15 23:28:36.211193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.962 [2024-07-15 23:28:36.211220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.962 qpair failed and we were unable to recover it. 00:25:20.962 [2024-07-15 23:28:36.211505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.962 [2024-07-15 23:28:36.211530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.962 qpair failed and we were unable to recover it. 00:25:20.962 [2024-07-15 23:28:36.211751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.962 [2024-07-15 23:28:36.211804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.962 qpair failed and we were unable to recover it. 00:25:20.962 [2024-07-15 23:28:36.211984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.962 [2024-07-15 23:28:36.212008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.962 qpair failed and we were unable to recover it. 00:25:20.962 [2024-07-15 23:28:36.212177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.962 [2024-07-15 23:28:36.212224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.962 qpair failed and we were unable to recover it. 00:25:20.962 [2024-07-15 23:28:36.212467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.962 [2024-07-15 23:28:36.212494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.962 qpair failed and we were unable to recover it. 00:25:20.962 [2024-07-15 23:28:36.212683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.962 [2024-07-15 23:28:36.212711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.962 qpair failed and we were unable to recover it. 00:25:20.962 [2024-07-15 23:28:36.212930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.962 [2024-07-15 23:28:36.212955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.962 qpair failed and we were unable to recover it. 00:25:20.962 [2024-07-15 23:28:36.213099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.962 [2024-07-15 23:28:36.213126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.962 qpair failed and we were unable to recover it. 00:25:20.962 [2024-07-15 23:28:36.213413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.962 [2024-07-15 23:28:36.213441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.962 qpair failed and we were unable to recover it. 
00:25:20.962 [2024-07-15 23:28:36.213677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.962 [2024-07-15 23:28:36.213702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.962 qpair failed and we were unable to recover it. 00:25:20.962 [2024-07-15 23:28:36.213895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.962 [2024-07-15 23:28:36.213922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.962 qpair failed and we were unable to recover it. 00:25:20.962 [2024-07-15 23:28:36.214110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.962 [2024-07-15 23:28:36.214137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.962 qpair failed and we were unable to recover it. 00:25:20.962 [2024-07-15 23:28:36.214357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.962 [2024-07-15 23:28:36.214382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.962 qpair failed and we were unable to recover it. 00:25:20.962 [2024-07-15 23:28:36.214626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.962 [2024-07-15 23:28:36.214653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.962 qpair failed and we were unable to recover it. 00:25:20.962 [2024-07-15 23:28:36.214906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.962 [2024-07-15 23:28:36.214934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.962 qpair failed and we were unable to recover it. 00:25:20.962 [2024-07-15 23:28:36.215280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.962 [2024-07-15 23:28:36.215322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.962 qpair failed and we were unable to recover it. 00:25:20.962 [2024-07-15 23:28:36.215555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.963 [2024-07-15 23:28:36.215582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.963 qpair failed and we were unable to recover it. 00:25:20.963 [2024-07-15 23:28:36.215807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.963 [2024-07-15 23:28:36.215835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.963 qpair failed and we were unable to recover it. 00:25:20.963 [2024-07-15 23:28:36.215997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.963 [2024-07-15 23:28:36.216025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.963 qpair failed and we were unable to recover it. 
00:25:20.963 [2024-07-15 23:28:36.216215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.963 [2024-07-15 23:28:36.216242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.963 qpair failed and we were unable to recover it. 00:25:20.963 [2024-07-15 23:28:36.216460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.963 [2024-07-15 23:28:36.216487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.963 qpair failed and we were unable to recover it. 00:25:20.963 [2024-07-15 23:28:36.216767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.963 [2024-07-15 23:28:36.216801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.963 qpair failed and we were unable to recover it. 00:25:20.963 [2024-07-15 23:28:36.216987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.963 [2024-07-15 23:28:36.217015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.963 qpair failed and we were unable to recover it. 00:25:20.963 [2024-07-15 23:28:36.217279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.963 [2024-07-15 23:28:36.217307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.963 qpair failed and we were unable to recover it. 00:25:20.963 [2024-07-15 23:28:36.217527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.963 [2024-07-15 23:28:36.217551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.963 qpair failed and we were unable to recover it. 00:25:20.963 [2024-07-15 23:28:36.217799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.963 [2024-07-15 23:28:36.217827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:20.963 qpair failed and we were unable to recover it. 00:25:21.238 [2024-07-15 23:28:36.218008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.238 [2024-07-15 23:28:36.218035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.238 qpair failed and we were unable to recover it. 00:25:21.238 [2024-07-15 23:28:36.218292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.238 [2024-07-15 23:28:36.218317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.238 qpair failed and we were unable to recover it. 00:25:21.238 [2024-07-15 23:28:36.218517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.238 [2024-07-15 23:28:36.218544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.238 qpair failed and we were unable to recover it. 
00:25:21.238 [2024-07-15 23:28:36.218800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.238 [2024-07-15 23:28:36.218826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.238 qpair failed and we were unable to recover it. 00:25:21.238 [2024-07-15 23:28:36.218995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.238 [2024-07-15 23:28:36.219019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.238 qpair failed and we were unable to recover it. 00:25:21.238 [2024-07-15 23:28:36.219197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.238 [2024-07-15 23:28:36.219224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.238 qpair failed and we were unable to recover it. 00:25:21.238 [2024-07-15 23:28:36.219422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.238 [2024-07-15 23:28:36.219450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.238 qpair failed and we were unable to recover it. 00:25:21.238 [2024-07-15 23:28:36.219625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.239 [2024-07-15 23:28:36.219649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.239 qpair failed and we were unable to recover it. 00:25:21.239 [2024-07-15 23:28:36.219884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.239 [2024-07-15 23:28:36.219911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.239 qpair failed and we were unable to recover it. 00:25:21.239 [2024-07-15 23:28:36.220176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.239 [2024-07-15 23:28:36.220202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.239 qpair failed and we were unable to recover it. 00:25:21.239 [2024-07-15 23:28:36.220336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.239 [2024-07-15 23:28:36.220365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.239 qpair failed and we were unable to recover it. 00:25:21.239 [2024-07-15 23:28:36.220571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.239 [2024-07-15 23:28:36.220598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.239 qpair failed and we were unable to recover it. 00:25:21.239 [2024-07-15 23:28:36.220822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.239 [2024-07-15 23:28:36.220850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.239 qpair failed and we were unable to recover it. 
00:25:21.239 [2024-07-15 23:28:36.221043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.239 [2024-07-15 23:28:36.221075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.239 qpair failed and we were unable to recover it. 00:25:21.239 [2024-07-15 23:28:36.221305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.239 [2024-07-15 23:28:36.221332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.239 qpair failed and we were unable to recover it. 00:25:21.239 [2024-07-15 23:28:36.221467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.239 [2024-07-15 23:28:36.221494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.239 qpair failed and we were unable to recover it. 00:25:21.239 [2024-07-15 23:28:36.221685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.239 [2024-07-15 23:28:36.221709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.239 qpair failed and we were unable to recover it. 00:25:21.239 [2024-07-15 23:28:36.221853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.239 [2024-07-15 23:28:36.221880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.239 qpair failed and we were unable to recover it. 00:25:21.239 [2024-07-15 23:28:36.222079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.239 [2024-07-15 23:28:36.222107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.239 qpair failed and we were unable to recover it. 00:25:21.239 [2024-07-15 23:28:36.222247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.239 [2024-07-15 23:28:36.222274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.239 qpair failed and we were unable to recover it. 00:25:21.239 [2024-07-15 23:28:36.222499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.239 [2024-07-15 23:28:36.222526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.239 qpair failed and we were unable to recover it. 00:25:21.239 [2024-07-15 23:28:36.222756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.239 [2024-07-15 23:28:36.222787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.239 qpair failed and we were unable to recover it. 00:25:21.239 [2024-07-15 23:28:36.222976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.239 [2024-07-15 23:28:36.223000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.239 qpair failed and we were unable to recover it. 
00:25:21.239 [2024-07-15 23:28:36.223246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.239 [2024-07-15 23:28:36.223273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.239 qpair failed and we were unable to recover it. 00:25:21.239 [2024-07-15 23:28:36.223487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.239 [2024-07-15 23:28:36.223514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.239 qpair failed and we were unable to recover it. 00:25:21.239 [2024-07-15 23:28:36.223731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.239 [2024-07-15 23:28:36.223761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.239 qpair failed and we were unable to recover it. 00:25:21.239 [2024-07-15 23:28:36.223983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.239 [2024-07-15 23:28:36.224010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.239 qpair failed and we were unable to recover it. 00:25:21.239 [2024-07-15 23:28:36.224183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.239 [2024-07-15 23:28:36.224210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.239 qpair failed and we were unable to recover it. 00:25:21.239 [2024-07-15 23:28:36.224479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.239 [2024-07-15 23:28:36.224504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.239 qpair failed and we were unable to recover it. 00:25:21.239 [2024-07-15 23:28:36.224804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.239 [2024-07-15 23:28:36.224829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.239 qpair failed and we were unable to recover it. 00:25:21.239 [2024-07-15 23:28:36.224980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.239 [2024-07-15 23:28:36.225004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.239 qpair failed and we were unable to recover it. 00:25:21.239 [2024-07-15 23:28:36.225182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.239 [2024-07-15 23:28:36.225205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.239 qpair failed and we were unable to recover it. 00:25:21.239 [2024-07-15 23:28:36.225355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.239 [2024-07-15 23:28:36.225382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.239 qpair failed and we were unable to recover it. 
00:25:21.239 [2024-07-15 23:28:36.225644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.239 [2024-07-15 23:28:36.225671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.239 qpair failed and we were unable to recover it. 00:25:21.239 [2024-07-15 23:28:36.225880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.239 [2024-07-15 23:28:36.225905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.239 qpair failed and we were unable to recover it. 00:25:21.239 [2024-07-15 23:28:36.226111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.239 [2024-07-15 23:28:36.226138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.239 qpair failed and we were unable to recover it. 00:25:21.239 [2024-07-15 23:28:36.226270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.239 [2024-07-15 23:28:36.226297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.239 qpair failed and we were unable to recover it. 00:25:21.239 [2024-07-15 23:28:36.226445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.239 [2024-07-15 23:28:36.226469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.239 qpair failed and we were unable to recover it. 00:25:21.239 [2024-07-15 23:28:36.226652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.239 [2024-07-15 23:28:36.226686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.239 qpair failed and we were unable to recover it. 00:25:21.239 [2024-07-15 23:28:36.226838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.239 [2024-07-15 23:28:36.226863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.239 qpair failed and we were unable to recover it. 00:25:21.239 [2024-07-15 23:28:36.227011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.239 [2024-07-15 23:28:36.227036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.239 qpair failed and we were unable to recover it. 00:25:21.239 [2024-07-15 23:28:36.227223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.239 [2024-07-15 23:28:36.227251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.239 qpair failed and we were unable to recover it. 00:25:21.239 [2024-07-15 23:28:36.227417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.239 [2024-07-15 23:28:36.227444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.239 qpair failed and we were unable to recover it. 
00:25:21.239 [2024-07-15 23:28:36.227682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.239 [2024-07-15 23:28:36.227707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.239 qpair failed and we were unable to recover it. 00:25:21.239 [2024-07-15 23:28:36.227880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.239 [2024-07-15 23:28:36.227908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.239 qpair failed and we were unable to recover it. 00:25:21.239 [2024-07-15 23:28:36.228172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.239 [2024-07-15 23:28:36.228200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.239 qpair failed and we were unable to recover it. 00:25:21.239 [2024-07-15 23:28:36.228447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.239 [2024-07-15 23:28:36.228472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.239 qpair failed and we were unable to recover it. 00:25:21.239 [2024-07-15 23:28:36.228703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.239 [2024-07-15 23:28:36.228731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.239 qpair failed and we were unable to recover it. 00:25:21.239 [2024-07-15 23:28:36.228907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.240 [2024-07-15 23:28:36.228934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.240 qpair failed and we were unable to recover it. 00:25:21.240 [2024-07-15 23:28:36.229104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.240 [2024-07-15 23:28:36.229129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.240 qpair failed and we were unable to recover it. 00:25:21.240 [2024-07-15 23:28:36.229292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.240 [2024-07-15 23:28:36.229319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.240 qpair failed and we were unable to recover it. 00:25:21.240 [2024-07-15 23:28:36.229547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.240 [2024-07-15 23:28:36.229575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.240 qpair failed and we were unable to recover it. 00:25:21.240 [2024-07-15 23:28:36.229720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.240 [2024-07-15 23:28:36.229750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.240 qpair failed and we were unable to recover it. 
00:25:21.240 [2024-07-15 23:28:36.229882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.240 [2024-07-15 23:28:36.229925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.240 qpair failed and we were unable to recover it. 00:25:21.240 [2024-07-15 23:28:36.230082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.240 [2024-07-15 23:28:36.230109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.240 qpair failed and we were unable to recover it. 00:25:21.240 [2024-07-15 23:28:36.230246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.240 [2024-07-15 23:28:36.230281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.240 qpair failed and we were unable to recover it. 00:25:21.240 [2024-07-15 23:28:36.230516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.240 [2024-07-15 23:28:36.230543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.240 qpair failed and we were unable to recover it. 00:25:21.240 [2024-07-15 23:28:36.230766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.240 [2024-07-15 23:28:36.230794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.240 qpair failed and we were unable to recover it. 00:25:21.240 [2024-07-15 23:28:36.230949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.240 [2024-07-15 23:28:36.230973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.240 qpair failed and we were unable to recover it. 00:25:21.240 [2024-07-15 23:28:36.231189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.240 [2024-07-15 23:28:36.231216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.240 qpair failed and we were unable to recover it. 00:25:21.240 [2024-07-15 23:28:36.231366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.240 [2024-07-15 23:28:36.231397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.240 qpair failed and we were unable to recover it. 00:25:21.240 [2024-07-15 23:28:36.231634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.240 [2024-07-15 23:28:36.231689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.240 qpair failed and we were unable to recover it. 00:25:21.240 [2024-07-15 23:28:36.231890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.240 [2024-07-15 23:28:36.231915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.240 qpair failed and we were unable to recover it. 
00:25:21.240 [2024-07-15 23:28:36.232089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.240 [2024-07-15 23:28:36.232116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.240 qpair failed and we were unable to recover it. 00:25:21.240 [2024-07-15 23:28:36.232391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.240 [2024-07-15 23:28:36.232415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.240 qpair failed and we were unable to recover it. 00:25:21.240 [2024-07-15 23:28:36.232658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.240 [2024-07-15 23:28:36.232686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.240 qpair failed and we were unable to recover it. 00:25:21.240 [2024-07-15 23:28:36.232883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.240 [2024-07-15 23:28:36.232908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.240 qpair failed and we were unable to recover it. 00:25:21.240 [2024-07-15 23:28:36.233075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.240 [2024-07-15 23:28:36.233099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.240 qpair failed and we were unable to recover it. 00:25:21.240 [2024-07-15 23:28:36.233347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.240 [2024-07-15 23:28:36.233374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.240 qpair failed and we were unable to recover it. 00:25:21.240 [2024-07-15 23:28:36.233573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.240 [2024-07-15 23:28:36.233599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.240 qpair failed and we were unable to recover it. 00:25:21.240 [2024-07-15 23:28:36.233810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.240 [2024-07-15 23:28:36.233835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.240 qpair failed and we were unable to recover it. 00:25:21.240 [2024-07-15 23:28:36.233986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.240 [2024-07-15 23:28:36.234020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.240 qpair failed and we were unable to recover it. 00:25:21.240 [2024-07-15 23:28:36.234273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.240 [2024-07-15 23:28:36.234300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.240 qpair failed and we were unable to recover it. 
00:25:21.240 [2024-07-15 23:28:36.234492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.240 [2024-07-15 23:28:36.234527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.240 qpair failed and we were unable to recover it. 00:25:21.240 [2024-07-15 23:28:36.234807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.240 [2024-07-15 23:28:36.234834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.240 qpair failed and we were unable to recover it. 00:25:21.240 [2024-07-15 23:28:36.234990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.240 [2024-07-15 23:28:36.235017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.240 qpair failed and we were unable to recover it. 00:25:21.240 [2024-07-15 23:28:36.235175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.240 [2024-07-15 23:28:36.235200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.240 qpair failed and we were unable to recover it. 00:25:21.240 [2024-07-15 23:28:36.235433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.240 [2024-07-15 23:28:36.235460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.240 qpair failed and we were unable to recover it. 00:25:21.240 [2024-07-15 23:28:36.235616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.240 [2024-07-15 23:28:36.235643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.240 qpair failed and we were unable to recover it. 00:25:21.240 [2024-07-15 23:28:36.235815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.240 [2024-07-15 23:28:36.235840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.240 qpair failed and we were unable to recover it. 00:25:21.240 [2024-07-15 23:28:36.235990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.240 [2024-07-15 23:28:36.236017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.240 qpair failed and we were unable to recover it. 00:25:21.240 [2024-07-15 23:28:36.236196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.240 [2024-07-15 23:28:36.236222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.240 qpair failed and we were unable to recover it. 00:25:21.240 [2024-07-15 23:28:36.236447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.240 [2024-07-15 23:28:36.236470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.240 qpair failed and we were unable to recover it. 
00:25:21.240 [2024-07-15 23:28:36.236638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.240 [2024-07-15 23:28:36.236665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.240 qpair failed and we were unable to recover it. 00:25:21.240 [2024-07-15 23:28:36.236848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.240 [2024-07-15 23:28:36.236887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.240 qpair failed and we were unable to recover it. 00:25:21.240 [2024-07-15 23:28:36.237044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.240 [2024-07-15 23:28:36.237068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.240 qpair failed and we were unable to recover it. 00:25:21.240 [2024-07-15 23:28:36.237265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.240 [2024-07-15 23:28:36.237297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.240 qpair failed and we were unable to recover it. 00:25:21.240 [2024-07-15 23:28:36.237559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.240 [2024-07-15 23:28:36.237590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.240 qpair failed and we were unable to recover it. 00:25:21.240 [2024-07-15 23:28:36.237795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.240 [2024-07-15 23:28:36.237820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.240 qpair failed and we were unable to recover it. 00:25:21.240 [2024-07-15 23:28:36.237976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.241 [2024-07-15 23:28:36.238009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.241 qpair failed and we were unable to recover it. 00:25:21.241 [2024-07-15 23:28:36.238234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.241 [2024-07-15 23:28:36.238262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.241 qpair failed and we were unable to recover it. 00:25:21.241 [2024-07-15 23:28:36.238540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.241 [2024-07-15 23:28:36.238564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.241 qpair failed and we were unable to recover it. 00:25:21.241 [2024-07-15 23:28:36.238818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.241 [2024-07-15 23:28:36.238844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.241 qpair failed and we were unable to recover it. 
00:25:21.241 [2024-07-15 23:28:36.238995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.241 [2024-07-15 23:28:36.239034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.241 qpair failed and we were unable to recover it. 00:25:21.241 [2024-07-15 23:28:36.239210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.241 [2024-07-15 23:28:36.239235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.241 qpair failed and we were unable to recover it. 00:25:21.241 [2024-07-15 23:28:36.239495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.241 [2024-07-15 23:28:36.239522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.241 qpair failed and we were unable to recover it. 00:25:21.241 [2024-07-15 23:28:36.239711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.241 [2024-07-15 23:28:36.239745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.241 qpair failed and we were unable to recover it. 00:25:21.241 [2024-07-15 23:28:36.239889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.241 [2024-07-15 23:28:36.239913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.241 qpair failed and we were unable to recover it. 00:25:21.241 [2024-07-15 23:28:36.240175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.241 [2024-07-15 23:28:36.240202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.241 qpair failed and we were unable to recover it. 00:25:21.241 [2024-07-15 23:28:36.240429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.241 [2024-07-15 23:28:36.240456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.241 qpair failed and we were unable to recover it. 00:25:21.241 [2024-07-15 23:28:36.240662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.241 [2024-07-15 23:28:36.240687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.241 qpair failed and we were unable to recover it. 00:25:21.241 [2024-07-15 23:28:36.240864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.241 [2024-07-15 23:28:36.240892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.241 qpair failed and we were unable to recover it. 00:25:21.241 [2024-07-15 23:28:36.241066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.241 [2024-07-15 23:28:36.241093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.241 qpair failed and we were unable to recover it. 
00:25:21.241 [2024-07-15 23:28:36.241322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.241 [2024-07-15 23:28:36.241347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.241 qpair failed and we were unable to recover it. 00:25:21.241 [2024-07-15 23:28:36.241604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.241 [2024-07-15 23:28:36.241631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.241 qpair failed and we were unable to recover it. 00:25:21.241 [2024-07-15 23:28:36.241879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.241 [2024-07-15 23:28:36.241907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.241 qpair failed and we were unable to recover it. 00:25:21.241 [2024-07-15 23:28:36.242160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.241 [2024-07-15 23:28:36.242184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.241 qpair failed and we were unable to recover it. 00:25:21.241 [2024-07-15 23:28:36.242450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.241 [2024-07-15 23:28:36.242478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.241 qpair failed and we were unable to recover it. 00:25:21.241 [2024-07-15 23:28:36.242671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.241 [2024-07-15 23:28:36.242699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.241 qpair failed and we were unable to recover it. 00:25:21.241 [2024-07-15 23:28:36.242906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.241 [2024-07-15 23:28:36.242931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.241 qpair failed and we were unable to recover it. 00:25:21.241 [2024-07-15 23:28:36.243155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.241 [2024-07-15 23:28:36.243182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.241 qpair failed and we were unable to recover it. 00:25:21.241 [2024-07-15 23:28:36.243359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.241 [2024-07-15 23:28:36.243386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.241 qpair failed and we were unable to recover it. 00:25:21.241 [2024-07-15 23:28:36.243573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.241 [2024-07-15 23:28:36.243598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.241 qpair failed and we were unable to recover it. 
00:25:21.241 [2024-07-15 23:28:36.243847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.241 [2024-07-15 23:28:36.243874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.241 qpair failed and we were unable to recover it. 00:25:21.241 [2024-07-15 23:28:36.244012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.241 [2024-07-15 23:28:36.244039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.241 qpair failed and we were unable to recover it. 00:25:21.241 [2024-07-15 23:28:36.244205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.241 [2024-07-15 23:28:36.244230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.241 qpair failed and we were unable to recover it. 00:25:21.241 [2024-07-15 23:28:36.244376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.241 [2024-07-15 23:28:36.244417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.241 qpair failed and we were unable to recover it. 00:25:21.241 [2024-07-15 23:28:36.244665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.241 [2024-07-15 23:28:36.244692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.241 qpair failed and we were unable to recover it. 00:25:21.241 [2024-07-15 23:28:36.244849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.241 [2024-07-15 23:28:36.244874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.241 qpair failed and we were unable to recover it. 00:25:21.241 [2024-07-15 23:28:36.245044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.241 [2024-07-15 23:28:36.245072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.241 qpair failed and we were unable to recover it. 00:25:21.241 [2024-07-15 23:28:36.245243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.241 [2024-07-15 23:28:36.245269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.241 qpair failed and we were unable to recover it. 00:25:21.241 [2024-07-15 23:28:36.245439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.241 [2024-07-15 23:28:36.245463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.241 qpair failed and we were unable to recover it. 00:25:21.241 [2024-07-15 23:28:36.245708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.241 [2024-07-15 23:28:36.245744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.241 qpair failed and we were unable to recover it. 
00:25:21.241 [2024-07-15 23:28:36.245983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.241 [2024-07-15 23:28:36.246007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.241 qpair failed and we were unable to recover it. 00:25:21.241 [2024-07-15 23:28:36.246197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.241 [2024-07-15 23:28:36.246222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.241 qpair failed and we were unable to recover it. 00:25:21.241 [2024-07-15 23:28:36.246397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.241 [2024-07-15 23:28:36.246435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.241 qpair failed and we were unable to recover it. 00:25:21.241 [2024-07-15 23:28:36.246650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.241 [2024-07-15 23:28:36.246677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.241 qpair failed and we were unable to recover it. 00:25:21.241 [2024-07-15 23:28:36.246841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.241 [2024-07-15 23:28:36.246865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.241 qpair failed and we were unable to recover it. 00:25:21.241 [2024-07-15 23:28:36.247009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.241 [2024-07-15 23:28:36.247050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.241 qpair failed and we were unable to recover it. 00:25:21.241 [2024-07-15 23:28:36.247210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.241 [2024-07-15 23:28:36.247238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.241 qpair failed and we were unable to recover it. 00:25:21.241 [2024-07-15 23:28:36.247483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.241 [2024-07-15 23:28:36.247508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.242 qpair failed and we were unable to recover it. 00:25:21.242 [2024-07-15 23:28:36.247662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.242 [2024-07-15 23:28:36.247690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.242 qpair failed and we were unable to recover it. 00:25:21.242 [2024-07-15 23:28:36.247910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.242 [2024-07-15 23:28:36.247935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.242 qpair failed and we were unable to recover it. 
00:25:21.242 [2024-07-15 23:28:36.248200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.242 [2024-07-15 23:28:36.248224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.242 qpair failed and we were unable to recover it. 00:25:21.242 [2024-07-15 23:28:36.248500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.242 [2024-07-15 23:28:36.248527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.242 qpair failed and we were unable to recover it. 00:25:21.242 [2024-07-15 23:28:36.248721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.242 [2024-07-15 23:28:36.248754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.242 qpair failed and we were unable to recover it. 00:25:21.242 [2024-07-15 23:28:36.248983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.242 [2024-07-15 23:28:36.249007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.242 qpair failed and we were unable to recover it. 00:25:21.242 [2024-07-15 23:28:36.249221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.242 [2024-07-15 23:28:36.249249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.242 qpair failed and we were unable to recover it. 00:25:21.242 [2024-07-15 23:28:36.249417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.242 [2024-07-15 23:28:36.249444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.242 qpair failed and we were unable to recover it. 00:25:21.242 [2024-07-15 23:28:36.249600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.242 [2024-07-15 23:28:36.249624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.242 qpair failed and we were unable to recover it. 00:25:21.242 [2024-07-15 23:28:36.249905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.242 [2024-07-15 23:28:36.249933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.242 qpair failed and we were unable to recover it. 00:25:21.242 [2024-07-15 23:28:36.250191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.242 [2024-07-15 23:28:36.250218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.242 qpair failed and we were unable to recover it. 00:25:21.242 [2024-07-15 23:28:36.250403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.242 [2024-07-15 23:28:36.250428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.242 qpair failed and we were unable to recover it. 
00:25:21.242 [2024-07-15 23:28:36.250585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.242 [2024-07-15 23:28:36.250623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.242 qpair failed and we were unable to recover it. 00:25:21.242 [2024-07-15 23:28:36.250865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.242 [2024-07-15 23:28:36.250893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.242 qpair failed and we were unable to recover it. 00:25:21.242 [2024-07-15 23:28:36.251115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.242 [2024-07-15 23:28:36.251139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.242 qpair failed and we were unable to recover it. 00:25:21.242 [2024-07-15 23:28:36.251283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.242 [2024-07-15 23:28:36.251309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.242 qpair failed and we were unable to recover it. 00:25:21.242 [2024-07-15 23:28:36.251532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.242 [2024-07-15 23:28:36.251558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.242 qpair failed and we were unable to recover it. 00:25:21.242 [2024-07-15 23:28:36.251744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.242 [2024-07-15 23:28:36.251768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.242 qpair failed and we were unable to recover it. 00:25:21.242 [2024-07-15 23:28:36.251980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.242 [2024-07-15 23:28:36.252007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.242 qpair failed and we were unable to recover it. 00:25:21.242 [2024-07-15 23:28:36.252235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.242 [2024-07-15 23:28:36.252262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.242 qpair failed and we were unable to recover it. 00:25:21.242 [2024-07-15 23:28:36.252517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.242 [2024-07-15 23:28:36.252541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.242 qpair failed and we were unable to recover it. 00:25:21.242 [2024-07-15 23:28:36.252772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.242 [2024-07-15 23:28:36.252800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.242 qpair failed and we were unable to recover it. 
00:25:21.242 [2024-07-15 23:28:36.253060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.242 [2024-07-15 23:28:36.253087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.242 qpair failed and we were unable to recover it. 00:25:21.242 [2024-07-15 23:28:36.253338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.242 [2024-07-15 23:28:36.253363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.242 qpair failed and we were unable to recover it. 00:25:21.242 [2024-07-15 23:28:36.253622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.242 [2024-07-15 23:28:36.253654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.242 qpair failed and we were unable to recover it. 00:25:21.242 [2024-07-15 23:28:36.253809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.242 [2024-07-15 23:28:36.253837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.242 qpair failed and we were unable to recover it. 00:25:21.242 [2024-07-15 23:28:36.253967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.242 [2024-07-15 23:28:36.253991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.242 qpair failed and we were unable to recover it. 00:25:21.242 [2024-07-15 23:28:36.254159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.242 [2024-07-15 23:28:36.254199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.242 qpair failed and we were unable to recover it. 00:25:21.242 [2024-07-15 23:28:36.254478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.242 [2024-07-15 23:28:36.254505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.242 qpair failed and we were unable to recover it. 00:25:21.242 [2024-07-15 23:28:36.254776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.242 [2024-07-15 23:28:36.254817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.242 qpair failed and we were unable to recover it. 00:25:21.242 [2024-07-15 23:28:36.255081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.242 [2024-07-15 23:28:36.255108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.242 qpair failed and we were unable to recover it. 00:25:21.242 [2024-07-15 23:28:36.255306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.242 [2024-07-15 23:28:36.255333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.242 qpair failed and we were unable to recover it. 
00:25:21.242 [2024-07-15 23:28:36.255521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.242 [2024-07-15 23:28:36.255549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.242 qpair failed and we were unable to recover it. 00:25:21.242 [2024-07-15 23:28:36.255792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.242 [2024-07-15 23:28:36.255817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.242 qpair failed and we were unable to recover it. 00:25:21.242 [2024-07-15 23:28:36.255981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.242 [2024-07-15 23:28:36.256005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.242 qpair failed and we were unable to recover it. 00:25:21.242 [2024-07-15 23:28:36.256231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.242 [2024-07-15 23:28:36.256254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.242 qpair failed and we were unable to recover it. 00:25:21.242 [2024-07-15 23:28:36.256380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.242 [2024-07-15 23:28:36.256406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.242 qpair failed and we were unable to recover it. 00:25:21.242 [2024-07-15 23:28:36.256545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.242 [2024-07-15 23:28:36.256572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.242 qpair failed and we were unable to recover it. 00:25:21.242 [2024-07-15 23:28:36.256771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.242 [2024-07-15 23:28:36.256796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.242 qpair failed and we were unable to recover it. 00:25:21.242 [2024-07-15 23:28:36.256949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.242 [2024-07-15 23:28:36.256975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.242 qpair failed and we were unable to recover it. 00:25:21.242 [2024-07-15 23:28:36.257222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.243 [2024-07-15 23:28:36.257250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.243 qpair failed and we were unable to recover it. 00:25:21.243 [2024-07-15 23:28:36.257530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.243 [2024-07-15 23:28:36.257555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.243 qpair failed and we were unable to recover it. 
00:25:21.243 [2024-07-15 23:28:36.257833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.243 [2024-07-15 23:28:36.257861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.243 qpair failed and we were unable to recover it. 00:25:21.243 [2024-07-15 23:28:36.258129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.243 [2024-07-15 23:28:36.258156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.243 qpair failed and we were unable to recover it. 00:25:21.243 [2024-07-15 23:28:36.258392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.243 [2024-07-15 23:28:36.258416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.243 qpair failed and we were unable to recover it. 00:25:21.243 [2024-07-15 23:28:36.258645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.243 [2024-07-15 23:28:36.258693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.243 qpair failed and we were unable to recover it. 00:25:21.243 [2024-07-15 23:28:36.258932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.243 [2024-07-15 23:28:36.258957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.243 qpair failed and we were unable to recover it. 00:25:21.243 [2024-07-15 23:28:36.259207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.243 [2024-07-15 23:28:36.259231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.243 qpair failed and we were unable to recover it. 00:25:21.243 [2024-07-15 23:28:36.259432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.243 [2024-07-15 23:28:36.259459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.243 qpair failed and we were unable to recover it. 00:25:21.243 [2024-07-15 23:28:36.259683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.243 [2024-07-15 23:28:36.259710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.243 qpair failed and we were unable to recover it. 00:25:21.243 [2024-07-15 23:28:36.259962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.243 [2024-07-15 23:28:36.259986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.243 qpair failed and we were unable to recover it. 00:25:21.243 [2024-07-15 23:28:36.260201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.243 [2024-07-15 23:28:36.260233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.243 qpair failed and we were unable to recover it. 
00:25:21.243 [2024-07-15 23:28:36.260384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.243 [2024-07-15 23:28:36.260411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.243 qpair failed and we were unable to recover it. 00:25:21.243 [2024-07-15 23:28:36.260687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.243 [2024-07-15 23:28:36.260712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.243 qpair failed and we were unable to recover it. 00:25:21.243 [2024-07-15 23:28:36.260968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.243 [2024-07-15 23:28:36.260996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.243 qpair failed and we were unable to recover it. 00:25:21.243 [2024-07-15 23:28:36.261248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.243 [2024-07-15 23:28:36.261275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.243 qpair failed and we were unable to recover it. 00:25:21.243 [2024-07-15 23:28:36.261494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.243 [2024-07-15 23:28:36.261519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.243 qpair failed and we were unable to recover it. 00:25:21.243 [2024-07-15 23:28:36.261762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.243 [2024-07-15 23:28:36.261790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.243 qpair failed and we were unable to recover it. 00:25:21.243 [2024-07-15 23:28:36.262021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.243 [2024-07-15 23:28:36.262048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.243 qpair failed and we were unable to recover it. 00:25:21.243 [2024-07-15 23:28:36.262241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.243 [2024-07-15 23:28:36.262265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.243 qpair failed and we were unable to recover it. 00:25:21.243 [2024-07-15 23:28:36.262459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.243 [2024-07-15 23:28:36.262485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.243 qpair failed and we were unable to recover it. 00:25:21.243 [2024-07-15 23:28:36.262648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.243 [2024-07-15 23:28:36.262675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.243 qpair failed and we were unable to recover it. 
00:25:21.243 [2024-07-15 23:28:36.262841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.243 [2024-07-15 23:28:36.262865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.243 qpair failed and we were unable to recover it. 00:25:21.243 [2024-07-15 23:28:36.263013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.243 [2024-07-15 23:28:36.263054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.243 qpair failed and we were unable to recover it. 00:25:21.243 [2024-07-15 23:28:36.263299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.243 [2024-07-15 23:28:36.263327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.243 qpair failed and we were unable to recover it. 00:25:21.243 [2024-07-15 23:28:36.263560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.243 [2024-07-15 23:28:36.263585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.243 qpair failed and we were unable to recover it. 00:25:21.243 [2024-07-15 23:28:36.263843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.243 [2024-07-15 23:28:36.263871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.243 qpair failed and we were unable to recover it. 00:25:21.243 [2024-07-15 23:28:36.264093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.243 [2024-07-15 23:28:36.264120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.243 qpair failed and we were unable to recover it. 00:25:21.243 [2024-07-15 23:28:36.264324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.243 [2024-07-15 23:28:36.264348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.243 qpair failed and we were unable to recover it. 00:25:21.243 [2024-07-15 23:28:36.264594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.243 [2024-07-15 23:28:36.264621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.243 qpair failed and we were unable to recover it. 00:25:21.243 [2024-07-15 23:28:36.264774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.243 [2024-07-15 23:28:36.264802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.243 qpair failed and we were unable to recover it. 00:25:21.243 [2024-07-15 23:28:36.264951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.243 [2024-07-15 23:28:36.264975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.243 qpair failed and we were unable to recover it. 
00:25:21.243 [2024-07-15 23:28:36.265245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.243 [2024-07-15 23:28:36.265273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.243 qpair failed and we were unable to recover it. 00:25:21.243 [2024-07-15 23:28:36.265477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.243 [2024-07-15 23:28:36.265505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.243 qpair failed and we were unable to recover it. 00:25:21.243 [2024-07-15 23:28:36.265771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.244 [2024-07-15 23:28:36.265812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.244 qpair failed and we were unable to recover it. 00:25:21.244 [2024-07-15 23:28:36.265987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.244 [2024-07-15 23:28:36.266021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.244 qpair failed and we were unable to recover it. 00:25:21.244 [2024-07-15 23:28:36.266299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.244 [2024-07-15 23:28:36.266326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.244 qpair failed and we were unable to recover it. 00:25:21.244 [2024-07-15 23:28:36.266520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.244 [2024-07-15 23:28:36.266544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.244 qpair failed and we were unable to recover it. 00:25:21.244 [2024-07-15 23:28:36.266715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.244 [2024-07-15 23:28:36.266763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.244 qpair failed and we were unable to recover it. 00:25:21.244 [2024-07-15 23:28:36.267033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.244 [2024-07-15 23:28:36.267060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.244 qpair failed and we were unable to recover it. 00:25:21.244 [2024-07-15 23:28:36.267193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.244 [2024-07-15 23:28:36.267218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.244 qpair failed and we were unable to recover it. 00:25:21.244 [2024-07-15 23:28:36.267416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.244 [2024-07-15 23:28:36.267442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.244 qpair failed and we were unable to recover it. 
00:25:21.244 [2024-07-15 23:28:36.267566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.244 [2024-07-15 23:28:36.267592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.244 qpair failed and we were unable to recover it. 00:25:21.244 [2024-07-15 23:28:36.267731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.244 [2024-07-15 23:28:36.267761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.244 qpair failed and we were unable to recover it. 00:25:21.244 [2024-07-15 23:28:36.267981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.244 [2024-07-15 23:28:36.268008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.244 qpair failed and we were unable to recover it. 00:25:21.244 [2024-07-15 23:28:36.268197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.244 [2024-07-15 23:28:36.268225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.244 qpair failed and we were unable to recover it. 00:25:21.244 [2024-07-15 23:28:36.268499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.244 [2024-07-15 23:28:36.268524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.244 qpair failed and we were unable to recover it. 00:25:21.244 [2024-07-15 23:28:36.268715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.244 [2024-07-15 23:28:36.268754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.244 qpair failed and we were unable to recover it. 00:25:21.244 [2024-07-15 23:28:36.268924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.244 [2024-07-15 23:28:36.268958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.244 qpair failed and we were unable to recover it. 00:25:21.244 [2024-07-15 23:28:36.269227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.244 [2024-07-15 23:28:36.269252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.244 qpair failed and we were unable to recover it. 00:25:21.244 [2024-07-15 23:28:36.269522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.244 [2024-07-15 23:28:36.269549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.244 qpair failed and we were unable to recover it. 00:25:21.244 [2024-07-15 23:28:36.269763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.244 [2024-07-15 23:28:36.269800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.244 qpair failed and we were unable to recover it. 
00:25:21.244 [2024-07-15 23:28:36.269969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.244 [2024-07-15 23:28:36.270002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.244 qpair failed and we were unable to recover it. 00:25:21.244 [2024-07-15 23:28:36.270249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.244 [2024-07-15 23:28:36.270276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.244 qpair failed and we were unable to recover it. 00:25:21.244 [2024-07-15 23:28:36.270445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.244 [2024-07-15 23:28:36.270472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.244 qpair failed and we were unable to recover it. 00:25:21.244 [2024-07-15 23:28:36.270713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.244 [2024-07-15 23:28:36.270743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.244 qpair failed and we were unable to recover it. 00:25:21.244 [2024-07-15 23:28:36.270978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.244 [2024-07-15 23:28:36.271006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.244 qpair failed and we were unable to recover it. 00:25:21.244 [2024-07-15 23:28:36.271223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.244 [2024-07-15 23:28:36.271250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.244 qpair failed and we were unable to recover it. 00:25:21.244 [2024-07-15 23:28:36.271494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.244 [2024-07-15 23:28:36.271518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.244 qpair failed and we were unable to recover it. 00:25:21.244 [2024-07-15 23:28:36.271683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.244 [2024-07-15 23:28:36.271710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.244 qpair failed and we were unable to recover it. 00:25:21.244 [2024-07-15 23:28:36.271942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.244 [2024-07-15 23:28:36.271966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.244 qpair failed and we were unable to recover it. 00:25:21.244 [2024-07-15 23:28:36.272231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.244 [2024-07-15 23:28:36.272256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.244 qpair failed and we were unable to recover it. 
00:25:21.244 [2024-07-15 23:28:36.272500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.244 [2024-07-15 23:28:36.272528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.244 qpair failed and we were unable to recover it. 00:25:21.244 [2024-07-15 23:28:36.272804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.244 [2024-07-15 23:28:36.272832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.244 qpair failed and we were unable to recover it. 00:25:21.244 [2024-07-15 23:28:36.273009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.244 [2024-07-15 23:28:36.273033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.244 qpair failed and we were unable to recover it. 00:25:21.244 [2024-07-15 23:28:36.273217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.244 [2024-07-15 23:28:36.273244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.244 qpair failed and we were unable to recover it. 00:25:21.244 [2024-07-15 23:28:36.273495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.244 [2024-07-15 23:28:36.273522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.244 qpair failed and we were unable to recover it. 00:25:21.244 [2024-07-15 23:28:36.273689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.244 [2024-07-15 23:28:36.273713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.244 qpair failed and we were unable to recover it. 00:25:21.244 [2024-07-15 23:28:36.273879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.244 [2024-07-15 23:28:36.273906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.244 qpair failed and we were unable to recover it. 00:25:21.244 [2024-07-15 23:28:36.274080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.244 [2024-07-15 23:28:36.274107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.244 qpair failed and we were unable to recover it. 00:25:21.244 [2024-07-15 23:28:36.274355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.244 [2024-07-15 23:28:36.274384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.244 qpair failed and we were unable to recover it. 00:25:21.244 [2024-07-15 23:28:36.274659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.244 [2024-07-15 23:28:36.274706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.244 qpair failed and we were unable to recover it. 
00:25:21.244 [2024-07-15 23:28:36.274873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.244 [2024-07-15 23:28:36.274906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.244 qpair failed and we were unable to recover it. 00:25:21.244 [2024-07-15 23:28:36.275173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.244 [2024-07-15 23:28:36.275198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.244 qpair failed and we were unable to recover it. 00:25:21.244 [2024-07-15 23:28:36.275424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.244 [2024-07-15 23:28:36.275451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.244 qpair failed and we were unable to recover it. 00:25:21.244 [2024-07-15 23:28:36.275679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.245 [2024-07-15 23:28:36.275706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.245 qpair failed and we were unable to recover it. 00:25:21.245 [2024-07-15 23:28:36.275931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.245 [2024-07-15 23:28:36.275956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.245 qpair failed and we were unable to recover it. 00:25:21.245 [2024-07-15 23:28:36.276240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.245 [2024-07-15 23:28:36.276267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.245 qpair failed and we were unable to recover it. 00:25:21.245 [2024-07-15 23:28:36.276550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.245 [2024-07-15 23:28:36.276577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.245 qpair failed and we were unable to recover it. 00:25:21.245 [2024-07-15 23:28:36.276825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.245 [2024-07-15 23:28:36.276850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.245 qpair failed and we were unable to recover it. 00:25:21.245 [2024-07-15 23:28:36.277088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.245 [2024-07-15 23:28:36.277115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.245 qpair failed and we were unable to recover it. 00:25:21.245 [2024-07-15 23:28:36.277385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.245 [2024-07-15 23:28:36.277412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.245 qpair failed and we were unable to recover it. 
00:25:21.245 [2024-07-15 23:28:36.277730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.245 [2024-07-15 23:28:36.277760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.245 qpair failed and we were unable to recover it. 00:25:21.245 [2024-07-15 23:28:36.277955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.245 [2024-07-15 23:28:36.277982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.245 qpair failed and we were unable to recover it. 00:25:21.245 [2024-07-15 23:28:36.278242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.245 [2024-07-15 23:28:36.278269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.245 qpair failed and we were unable to recover it. 00:25:21.245 [2024-07-15 23:28:36.278463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.245 [2024-07-15 23:28:36.278487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.245 qpair failed and we were unable to recover it. 00:25:21.245 [2024-07-15 23:28:36.278727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.245 [2024-07-15 23:28:36.278761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.245 qpair failed and we were unable to recover it. 00:25:21.245 [2024-07-15 23:28:36.279034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.245 [2024-07-15 23:28:36.279061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.245 qpair failed and we were unable to recover it. 00:25:21.245 [2024-07-15 23:28:36.279328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.245 [2024-07-15 23:28:36.279353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.245 qpair failed and we were unable to recover it. 00:25:21.245 [2024-07-15 23:28:36.279637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.245 [2024-07-15 23:28:36.279664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.245 qpair failed and we were unable to recover it. 00:25:21.245 [2024-07-15 23:28:36.279933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.245 [2024-07-15 23:28:36.279961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.245 qpair failed and we were unable to recover it. 00:25:21.245 [2024-07-15 23:28:36.280180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.245 [2024-07-15 23:28:36.280204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.245 qpair failed and we were unable to recover it. 
00:25:21.245 [2024-07-15 23:28:36.280356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.245 [2024-07-15 23:28:36.280383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.245 qpair failed and we were unable to recover it. 00:25:21.245 [2024-07-15 23:28:36.280606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.245 [2024-07-15 23:28:36.280634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.245 qpair failed and we were unable to recover it. 00:25:21.245 [2024-07-15 23:28:36.280880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.245 [2024-07-15 23:28:36.280905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.245 qpair failed and we were unable to recover it. 00:25:21.245 [2024-07-15 23:28:36.281165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.245 [2024-07-15 23:28:36.281192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.245 qpair failed and we were unable to recover it. 00:25:21.245 [2024-07-15 23:28:36.281461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.245 [2024-07-15 23:28:36.281488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.245 qpair failed and we were unable to recover it. 00:25:21.245 [2024-07-15 23:28:36.281750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.245 [2024-07-15 23:28:36.281778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.245 qpair failed and we were unable to recover it. 00:25:21.245 [2024-07-15 23:28:36.282038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.245 [2024-07-15 23:28:36.282064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.245 qpair failed and we were unable to recover it. 00:25:21.245 [2024-07-15 23:28:36.282220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.245 [2024-07-15 23:28:36.282247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.245 qpair failed and we were unable to recover it. 00:25:21.245 [2024-07-15 23:28:36.282400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.245 [2024-07-15 23:28:36.282423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.245 qpair failed and we were unable to recover it. 00:25:21.245 [2024-07-15 23:28:36.282681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.245 [2024-07-15 23:28:36.282729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.245 qpair failed and we were unable to recover it. 
00:25:21.245 [2024-07-15 23:28:36.283004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.245 [2024-07-15 23:28:36.283046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.245 qpair failed and we were unable to recover it. 00:25:21.245 [2024-07-15 23:28:36.283321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.245 [2024-07-15 23:28:36.283346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.245 qpair failed and we were unable to recover it. 00:25:21.245 [2024-07-15 23:28:36.283589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.245 [2024-07-15 23:28:36.283615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.245 qpair failed and we were unable to recover it. 00:25:21.245 [2024-07-15 23:28:36.283888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.245 [2024-07-15 23:28:36.283916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.245 qpair failed and we were unable to recover it. 00:25:21.245 [2024-07-15 23:28:36.284150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.245 [2024-07-15 23:28:36.284178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.245 qpair failed and we were unable to recover it. 00:25:21.245 [2024-07-15 23:28:36.284439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.245 [2024-07-15 23:28:36.284465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.245 qpair failed and we were unable to recover it. 00:25:21.245 [2024-07-15 23:28:36.284750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.245 [2024-07-15 23:28:36.284777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.245 qpair failed and we were unable to recover it. 00:25:21.245 [2024-07-15 23:28:36.285006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.245 [2024-07-15 23:28:36.285031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.245 qpair failed and we were unable to recover it. 00:25:21.245 [2024-07-15 23:28:36.285231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.245 [2024-07-15 23:28:36.285257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.245 qpair failed and we were unable to recover it. 00:25:21.245 [2024-07-15 23:28:36.285502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.245 [2024-07-15 23:28:36.285529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.245 qpair failed and we were unable to recover it. 
00:25:21.245 [2024-07-15 23:28:36.285727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.245 [2024-07-15 23:28:36.285758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.245 qpair failed and we were unable to recover it. 00:25:21.245 [2024-07-15 23:28:36.285961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.245 [2024-07-15 23:28:36.285988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.245 qpair failed and we were unable to recover it. 00:25:21.245 [2024-07-15 23:28:36.286217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.245 [2024-07-15 23:28:36.286245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.245 qpair failed and we were unable to recover it. 00:25:21.245 [2024-07-15 23:28:36.286478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.245 [2024-07-15 23:28:36.286502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.245 qpair failed and we were unable to recover it. 00:25:21.245 [2024-07-15 23:28:36.286718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.246 [2024-07-15 23:28:36.286752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.246 qpair failed and we were unable to recover it. 00:25:21.246 [2024-07-15 23:28:36.286870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.246 [2024-07-15 23:28:36.286897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.246 qpair failed and we were unable to recover it. 00:25:21.246 [2024-07-15 23:28:36.287083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.246 [2024-07-15 23:28:36.287108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.246 qpair failed and we were unable to recover it. 00:25:21.246 [2024-07-15 23:28:36.287253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.246 [2024-07-15 23:28:36.287280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.246 qpair failed and we were unable to recover it. 00:25:21.246 [2024-07-15 23:28:36.287430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.246 [2024-07-15 23:28:36.287458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.246 qpair failed and we were unable to recover it. 00:25:21.246 [2024-07-15 23:28:36.287736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.246 [2024-07-15 23:28:36.287773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.246 qpair failed and we were unable to recover it. 
00:25:21.246 [2024-07-15 23:28:36.288019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.246 [2024-07-15 23:28:36.288045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.246 qpair failed and we were unable to recover it. 00:25:21.246 [2024-07-15 23:28:36.288284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.246 [2024-07-15 23:28:36.288311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.246 qpair failed and we were unable to recover it. 00:25:21.246 [2024-07-15 23:28:36.288535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.246 [2024-07-15 23:28:36.288559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.246 qpair failed and we were unable to recover it. 00:25:21.246 [2024-07-15 23:28:36.288702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.246 [2024-07-15 23:28:36.288728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.246 qpair failed and we were unable to recover it. 00:25:21.246 [2024-07-15 23:28:36.288998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.246 [2024-07-15 23:28:36.289025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.246 qpair failed and we were unable to recover it. 00:25:21.246 [2024-07-15 23:28:36.289297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.246 [2024-07-15 23:28:36.289321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.246 qpair failed and we were unable to recover it. 00:25:21.246 [2024-07-15 23:28:36.289598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.246 [2024-07-15 23:28:36.289625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.246 qpair failed and we were unable to recover it. 00:25:21.246 [2024-07-15 23:28:36.289895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.246 [2024-07-15 23:28:36.289920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.246 qpair failed and we were unable to recover it. 00:25:21.246 [2024-07-15 23:28:36.290152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.246 [2024-07-15 23:28:36.290177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.246 qpair failed and we were unable to recover it. 00:25:21.246 [2024-07-15 23:28:36.290423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.246 [2024-07-15 23:28:36.290450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.246 qpair failed and we were unable to recover it. 
00:25:21.246 [2024-07-15 23:28:36.290642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.246 [2024-07-15 23:28:36.290670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.246 qpair failed and we were unable to recover it. 00:25:21.246 [2024-07-15 23:28:36.290924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.246 [2024-07-15 23:28:36.290952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.246 qpair failed and we were unable to recover it. 00:25:21.246 [2024-07-15 23:28:36.291151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.246 [2024-07-15 23:28:36.291178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.246 qpair failed and we were unable to recover it. 00:25:21.246 [2024-07-15 23:28:36.291451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.246 [2024-07-15 23:28:36.291477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.246 qpair failed and we were unable to recover it. 00:25:21.246 [2024-07-15 23:28:36.291699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.246 [2024-07-15 23:28:36.291724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.246 qpair failed and we were unable to recover it. 00:25:21.246 [2024-07-15 23:28:36.291974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.246 [2024-07-15 23:28:36.292001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.246 qpair failed and we were unable to recover it. 00:25:21.246 [2024-07-15 23:28:36.292198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.246 [2024-07-15 23:28:36.292225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.246 qpair failed and we were unable to recover it. 00:25:21.246 [2024-07-15 23:28:36.292373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.246 [2024-07-15 23:28:36.292397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.246 qpair failed and we were unable to recover it. 00:25:21.246 [2024-07-15 23:28:36.292534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.246 [2024-07-15 23:28:36.292574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.246 qpair failed and we were unable to recover it. 00:25:21.246 [2024-07-15 23:28:36.292838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.246 [2024-07-15 23:28:36.292867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.246 qpair failed and we were unable to recover it. 
00:25:21.246 [2024-07-15 23:28:36.293121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.246 [2024-07-15 23:28:36.293146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.246 qpair failed and we were unable to recover it. 00:25:21.246 [2024-07-15 23:28:36.293296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.246 [2024-07-15 23:28:36.293324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.246 qpair failed and we were unable to recover it. 00:25:21.246 [2024-07-15 23:28:36.293439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.246 [2024-07-15 23:28:36.293466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.246 qpair failed and we were unable to recover it. 00:25:21.246 [2024-07-15 23:28:36.293704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.246 [2024-07-15 23:28:36.293728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.246 qpair failed and we were unable to recover it. 00:25:21.246 [2024-07-15 23:28:36.293961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.246 [2024-07-15 23:28:36.293989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.246 qpair failed and we were unable to recover it. 00:25:21.246 [2024-07-15 23:28:36.294121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.246 [2024-07-15 23:28:36.294148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.246 qpair failed and we were unable to recover it. 00:25:21.246 [2024-07-15 23:28:36.294388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.246 [2024-07-15 23:28:36.294412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.246 qpair failed and we were unable to recover it. 00:25:21.246 [2024-07-15 23:28:36.294642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.246 [2024-07-15 23:28:36.294669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.246 qpair failed and we were unable to recover it. 00:25:21.246 [2024-07-15 23:28:36.294901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.246 [2024-07-15 23:28:36.294928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.246 qpair failed and we were unable to recover it. 00:25:21.246 [2024-07-15 23:28:36.295150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.246 [2024-07-15 23:28:36.295175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.246 qpair failed and we were unable to recover it. 
00:25:21.246 [2024-07-15 23:28:36.295454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.246 [2024-07-15 23:28:36.295482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.246 qpair failed and we were unable to recover it. 00:25:21.246 [2024-07-15 23:28:36.295764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.246 [2024-07-15 23:28:36.295792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.246 qpair failed and we were unable to recover it. 00:25:21.246 [2024-07-15 23:28:36.296099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.246 [2024-07-15 23:28:36.296139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.246 qpair failed and we were unable to recover it. 00:25:21.246 [2024-07-15 23:28:36.296410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.246 [2024-07-15 23:28:36.296438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.246 qpair failed and we were unable to recover it. 00:25:21.246 [2024-07-15 23:28:36.296634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.246 [2024-07-15 23:28:36.296661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.246 qpair failed and we were unable to recover it. 00:25:21.246 [2024-07-15 23:28:36.296836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.246 [2024-07-15 23:28:36.296861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.246 qpair failed and we were unable to recover it. 00:25:21.247 [2024-07-15 23:28:36.297063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.247 [2024-07-15 23:28:36.297090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.247 qpair failed and we were unable to recover it. 00:25:21.247 [2024-07-15 23:28:36.297332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.247 [2024-07-15 23:28:36.297359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.247 qpair failed and we were unable to recover it. 00:25:21.247 [2024-07-15 23:28:36.297554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.247 [2024-07-15 23:28:36.297578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.247 qpair failed and we were unable to recover it. 00:25:21.247 [2024-07-15 23:28:36.297806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.247 [2024-07-15 23:28:36.297831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.247 qpair failed and we were unable to recover it. 
00:25:21.247 [2024-07-15 23:28:36.298065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.247 [2024-07-15 23:28:36.298092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.247 qpair failed and we were unable to recover it. 00:25:21.247 [2024-07-15 23:28:36.298373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.247 [2024-07-15 23:28:36.298398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.247 qpair failed and we were unable to recover it. 00:25:21.247 [2024-07-15 23:28:36.298683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.247 [2024-07-15 23:28:36.298730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.247 qpair failed and we were unable to recover it. 00:25:21.247 [2024-07-15 23:28:36.299022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.247 [2024-07-15 23:28:36.299049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.247 qpair failed and we were unable to recover it. 00:25:21.247 [2024-07-15 23:28:36.299276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.247 [2024-07-15 23:28:36.299300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.247 qpair failed and we were unable to recover it. 00:25:21.247 [2024-07-15 23:28:36.299573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.247 [2024-07-15 23:28:36.299600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.247 qpair failed and we were unable to recover it. 00:25:21.247 [2024-07-15 23:28:36.299872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.247 [2024-07-15 23:28:36.299901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.247 qpair failed and we were unable to recover it. 00:25:21.247 [2024-07-15 23:28:36.300149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.247 [2024-07-15 23:28:36.300174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.247 qpair failed and we were unable to recover it. 00:25:21.247 [2024-07-15 23:28:36.300301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.247 [2024-07-15 23:28:36.300328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.247 qpair failed and we were unable to recover it. 00:25:21.247 [2024-07-15 23:28:36.300526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.247 [2024-07-15 23:28:36.300553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.247 qpair failed and we were unable to recover it. 
00:25:21.247 [2024-07-15 23:28:36.300690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.247 [2024-07-15 23:28:36.300714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.247 qpair failed and we were unable to recover it. 00:25:21.247 [2024-07-15 23:28:36.300865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.247 [2024-07-15 23:28:36.300906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.247 qpair failed and we were unable to recover it. 00:25:21.247 [2024-07-15 23:28:36.301072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.247 [2024-07-15 23:28:36.301126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:21.247 qpair failed and we were unable to recover it. 00:25:21.247 [2024-07-15 23:28:36.301411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.247 [2024-07-15 23:28:36.301438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:21.247 qpair failed and we were unable to recover it. 00:25:21.247 [2024-07-15 23:28:36.301719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.247 [2024-07-15 23:28:36.301757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:21.247 qpair failed and we were unable to recover it. 00:25:21.247 [2024-07-15 23:28:36.302077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.247 [2024-07-15 23:28:36.302106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:21.247 qpair failed and we were unable to recover it. 00:25:21.247 [2024-07-15 23:28:36.302384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.247 [2024-07-15 23:28:36.302409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:21.247 qpair failed and we were unable to recover it. 00:25:21.247 [2024-07-15 23:28:36.302728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.247 [2024-07-15 23:28:36.302767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:21.247 qpair failed and we were unable to recover it. 00:25:21.247 [2024-07-15 23:28:36.302953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.247 [2024-07-15 23:28:36.302978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:21.247 qpair failed and we were unable to recover it. 00:25:21.247 [2024-07-15 23:28:36.303209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.247 [2024-07-15 23:28:36.303234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:21.247 qpair failed and we were unable to recover it. 
00:25:21.247 [2024-07-15 23:28:36.303526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.247 [2024-07-15 23:28:36.303553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:21.247 qpair failed and we were unable to recover it. 00:25:21.247 [2024-07-15 23:28:36.303770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.247 [2024-07-15 23:28:36.303798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:21.247 qpair failed and we were unable to recover it. 00:25:21.247 [2024-07-15 23:28:36.303990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.247 [2024-07-15 23:28:36.304014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:21.247 qpair failed and we were unable to recover it. 00:25:21.247 [2024-07-15 23:28:36.304248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.247 [2024-07-15 23:28:36.304276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:21.247 qpair failed and we were unable to recover it. 00:25:21.247 [2024-07-15 23:28:36.304570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.247 [2024-07-15 23:28:36.304616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:21.247 qpair failed and we were unable to recover it. 00:25:21.247 [2024-07-15 23:28:36.304880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.247 [2024-07-15 23:28:36.304914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:21.247 qpair failed and we were unable to recover it. 00:25:21.247 [2024-07-15 23:28:36.305124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.247 [2024-07-15 23:28:36.305152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:21.247 qpair failed and we were unable to recover it. 00:25:21.247 [2024-07-15 23:28:36.305412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.247 [2024-07-15 23:28:36.305459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:21.247 qpair failed and we were unable to recover it. 00:25:21.247 [2024-07-15 23:28:36.305683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.247 [2024-07-15 23:28:36.305708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:21.247 qpair failed and we were unable to recover it. 00:25:21.247 [2024-07-15 23:28:36.305902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.247 [2024-07-15 23:28:36.305927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:21.247 qpair failed and we were unable to recover it. 
00:25:21.247 [2024-07-15 23:28:36.306094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.247 [2024-07-15 23:28:36.306137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.247 qpair failed and we were unable to recover it. 00:25:21.247 [2024-07-15 23:28:36.306362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.248 [2024-07-15 23:28:36.306389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.248 qpair failed and we were unable to recover it. 00:25:21.248 [2024-07-15 23:28:36.306629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.248 [2024-07-15 23:28:36.306676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.248 qpair failed and we were unable to recover it. 00:25:21.248 [2024-07-15 23:28:36.306896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.248 [2024-07-15 23:28:36.306922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.248 qpair failed and we were unable to recover it. 00:25:21.248 [2024-07-15 23:28:36.307137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.248 [2024-07-15 23:28:36.307164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.248 qpair failed and we were unable to recover it. 00:25:21.248 [2024-07-15 23:28:36.307426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.248 [2024-07-15 23:28:36.307450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.248 qpair failed and we were unable to recover it. 00:25:21.248 [2024-07-15 23:28:36.307646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.248 [2024-07-15 23:28:36.307694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.248 qpair failed and we were unable to recover it. 00:25:21.248 [2024-07-15 23:28:36.307974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.248 [2024-07-15 23:28:36.307999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.248 qpair failed and we were unable to recover it. 00:25:21.248 [2024-07-15 23:28:36.308229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.248 [2024-07-15 23:28:36.308277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.248 qpair failed and we were unable to recover it. 00:25:21.248 [2024-07-15 23:28:36.308573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.248 [2024-07-15 23:28:36.308598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.248 qpair failed and we were unable to recover it. 
00:25:21.248 [2024-07-15 23:28:36.308879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.248 [2024-07-15 23:28:36.308908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.248 qpair failed and we were unable to recover it. 00:25:21.248 [2024-07-15 23:28:36.309157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.248 [2024-07-15 23:28:36.309207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.248 qpair failed and we were unable to recover it. 00:25:21.248 [2024-07-15 23:28:36.309491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.248 [2024-07-15 23:28:36.309540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.248 qpair failed and we were unable to recover it. 00:25:21.248 [2024-07-15 23:28:36.309828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.248 [2024-07-15 23:28:36.309853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.248 qpair failed and we were unable to recover it. 00:25:21.248 [2024-07-15 23:28:36.310147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.248 [2024-07-15 23:28:36.310174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.248 qpair failed and we were unable to recover it. 00:25:21.248 [2024-07-15 23:28:36.310463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.248 [2024-07-15 23:28:36.310512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.248 qpair failed and we were unable to recover it. 00:25:21.248 [2024-07-15 23:28:36.310755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.248 [2024-07-15 23:28:36.310783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.248 qpair failed and we were unable to recover it. 00:25:21.248 [2024-07-15 23:28:36.311026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.248 [2024-07-15 23:28:36.311051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.248 qpair failed and we were unable to recover it. 00:25:21.248 [2024-07-15 23:28:36.311315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.248 [2024-07-15 23:28:36.311342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.248 qpair failed and we were unable to recover it. 00:25:21.248 [2024-07-15 23:28:36.311581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.248 [2024-07-15 23:28:36.311629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.248 qpair failed and we were unable to recover it. 
00:25:21.248 [2024-07-15 23:28:36.311914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.248 [2024-07-15 23:28:36.311942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.248 qpair failed and we were unable to recover it. 00:25:21.248 [2024-07-15 23:28:36.312209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.248 [2024-07-15 23:28:36.312231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.248 qpair failed and we were unable to recover it. 00:25:21.248 [2024-07-15 23:28:36.312512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.248 [2024-07-15 23:28:36.312543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.248 qpair failed and we were unable to recover it. 00:25:21.248 [2024-07-15 23:28:36.312773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.248 [2024-07-15 23:28:36.312801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.248 qpair failed and we were unable to recover it. 00:25:21.248 [2024-07-15 23:28:36.312927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.248 [2024-07-15 23:28:36.312954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.248 qpair failed and we were unable to recover it. 00:25:21.248 [2024-07-15 23:28:36.313117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.248 [2024-07-15 23:28:36.313138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.248 qpair failed and we were unable to recover it. 00:25:21.248 [2024-07-15 23:28:36.313387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.248 [2024-07-15 23:28:36.313414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.248 qpair failed and we were unable to recover it. 00:25:21.248 [2024-07-15 23:28:36.313694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.248 [2024-07-15 23:28:36.313749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.248 qpair failed and we were unable to recover it. 00:25:21.248 [2024-07-15 23:28:36.314055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.248 [2024-07-15 23:28:36.314078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.248 qpair failed and we were unable to recover it. 00:25:21.248 [2024-07-15 23:28:36.314360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.248 [2024-07-15 23:28:36.314382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.248 qpair failed and we were unable to recover it. 
00:25:21.248 [2024-07-15 23:28:36.314669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.248 [2024-07-15 23:28:36.314696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.248 qpair failed and we were unable to recover it. 00:25:21.248 [2024-07-15 23:28:36.314948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.248 [2024-07-15 23:28:36.314973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.248 qpair failed and we were unable to recover it. 00:25:21.248 [2024-07-15 23:28:36.315230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.248 [2024-07-15 23:28:36.315257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.248 qpair failed and we were unable to recover it. 00:25:21.248 [2024-07-15 23:28:36.315522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.248 [2024-07-15 23:28:36.315545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.248 qpair failed and we were unable to recover it. 00:25:21.248 [2024-07-15 23:28:36.315825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.248 [2024-07-15 23:28:36.315853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.248 qpair failed and we were unable to recover it. 00:25:21.248 [2024-07-15 23:28:36.316078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.248 [2024-07-15 23:28:36.316105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.248 qpair failed and we were unable to recover it. 00:25:21.248 [2024-07-15 23:28:36.316298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.248 [2024-07-15 23:28:36.316335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.248 qpair failed and we were unable to recover it. 00:25:21.248 [2024-07-15 23:28:36.316554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.248 [2024-07-15 23:28:36.316576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.248 qpair failed and we were unable to recover it. 00:25:21.248 [2024-07-15 23:28:36.316844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.248 [2024-07-15 23:28:36.316873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.248 qpair failed and we were unable to recover it. 00:25:21.248 [2024-07-15 23:28:36.317155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.248 [2024-07-15 23:28:36.317208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.248 qpair failed and we were unable to recover it. 
00:25:21.248 [2024-07-15 23:28:36.317490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.248 [2024-07-15 23:28:36.317541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.248 qpair failed and we were unable to recover it. 00:25:21.248 [2024-07-15 23:28:36.317800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.248 [2024-07-15 23:28:36.317825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.248 qpair failed and we were unable to recover it. 00:25:21.248 [2024-07-15 23:28:36.318111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.249 [2024-07-15 23:28:36.318138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.249 qpair failed and we were unable to recover it. 00:25:21.249 [2024-07-15 23:28:36.318356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.249 [2024-07-15 23:28:36.318406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.249 qpair failed and we were unable to recover it. 00:25:21.249 [2024-07-15 23:28:36.318611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.249 [2024-07-15 23:28:36.318660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.249 qpair failed and we were unable to recover it. 00:25:21.249 [2024-07-15 23:28:36.318949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.249 [2024-07-15 23:28:36.318974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.249 qpair failed and we were unable to recover it. 00:25:21.249 [2024-07-15 23:28:36.319213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.249 [2024-07-15 23:28:36.319240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.249 qpair failed and we were unable to recover it. 00:25:21.249 [2024-07-15 23:28:36.319485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.249 [2024-07-15 23:28:36.319533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.249 qpair failed and we were unable to recover it. 00:25:21.249 [2024-07-15 23:28:36.319811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.249 [2024-07-15 23:28:36.319839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.249 qpair failed and we were unable to recover it. 00:25:21.249 [2024-07-15 23:28:36.320077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.249 [2024-07-15 23:28:36.320100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.249 qpair failed and we were unable to recover it. 
00:25:21.249 [2024-07-15 23:28:36.320348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.249 [2024-07-15 23:28:36.320375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.249 qpair failed and we were unable to recover it. 00:25:21.249 [2024-07-15 23:28:36.320562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.249 [2024-07-15 23:28:36.320611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.249 qpair failed and we were unable to recover it. 00:25:21.249 [2024-07-15 23:28:36.320883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.249 [2024-07-15 23:28:36.320911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.249 qpair failed and we were unable to recover it. 00:25:21.249 [2024-07-15 23:28:36.321178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.249 [2024-07-15 23:28:36.321200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.249 qpair failed and we were unable to recover it. 00:25:21.249 [2024-07-15 23:28:36.321453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.249 [2024-07-15 23:28:36.321480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.249 qpair failed and we were unable to recover it. 00:25:21.249 [2024-07-15 23:28:36.321710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.249 [2024-07-15 23:28:36.321744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.249 qpair failed and we were unable to recover it. 00:25:21.249 [2024-07-15 23:28:36.322022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.249 [2024-07-15 23:28:36.322049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.249 qpair failed and we were unable to recover it. 00:25:21.249 [2024-07-15 23:28:36.322259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.249 [2024-07-15 23:28:36.322282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.249 qpair failed and we were unable to recover it. 00:25:21.249 [2024-07-15 23:28:36.322465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.249 [2024-07-15 23:28:36.322492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.249 qpair failed and we were unable to recover it. 00:25:21.249 [2024-07-15 23:28:36.322714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.249 [2024-07-15 23:28:36.322747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.249 qpair failed and we were unable to recover it. 
00:25:21.249 [2024-07-15 23:28:36.322995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.249 [2024-07-15 23:28:36.323022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.249 qpair failed and we were unable to recover it. 00:25:21.249 [2024-07-15 23:28:36.323266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.249 [2024-07-15 23:28:36.323288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.249 qpair failed and we were unable to recover it. 00:25:21.249 [2024-07-15 23:28:36.323498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.249 [2024-07-15 23:28:36.323525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.249 qpair failed and we were unable to recover it. 00:25:21.249 [2024-07-15 23:28:36.323749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.249 [2024-07-15 23:28:36.323782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.249 qpair failed and we were unable to recover it. 00:25:21.249 [2024-07-15 23:28:36.324022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.249 [2024-07-15 23:28:36.324050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.249 qpair failed and we were unable to recover it. 00:25:21.249 [2024-07-15 23:28:36.324213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.249 [2024-07-15 23:28:36.324236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.249 qpair failed and we were unable to recover it. 00:25:21.249 [2024-07-15 23:28:36.324417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.249 [2024-07-15 23:28:36.324444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.249 qpair failed and we were unable to recover it. 00:25:21.249 [2024-07-15 23:28:36.324613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.249 [2024-07-15 23:28:36.324640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.249 qpair failed and we were unable to recover it. 00:25:21.249 [2024-07-15 23:28:36.324907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.249 [2024-07-15 23:28:36.324932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.249 qpair failed and we were unable to recover it. 00:25:21.249 [2024-07-15 23:28:36.325145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.249 [2024-07-15 23:28:36.325168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.249 qpair failed and we were unable to recover it. 
00:25:21.249 [2024-07-15 23:28:36.325361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.249 [2024-07-15 23:28:36.325389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.249 qpair failed and we were unable to recover it. 00:25:21.249 [2024-07-15 23:28:36.325674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.249 [2024-07-15 23:28:36.325722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.249 qpair failed and we were unable to recover it. 00:25:21.249 [2024-07-15 23:28:36.326010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.249 [2024-07-15 23:28:36.326039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.249 qpair failed and we were unable to recover it. 00:25:21.249 [2024-07-15 23:28:36.326261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.249 [2024-07-15 23:28:36.326283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.249 qpair failed and we were unable to recover it. 00:25:21.249 [2024-07-15 23:28:36.326487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.249 [2024-07-15 23:28:36.326515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.249 qpair failed and we were unable to recover it. 00:25:21.249 [2024-07-15 23:28:36.326708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.249 [2024-07-15 23:28:36.326735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.249 qpair failed and we were unable to recover it. 00:25:21.249 [2024-07-15 23:28:36.327012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.249 [2024-07-15 23:28:36.327039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.249 qpair failed and we were unable to recover it. 00:25:21.249 [2024-07-15 23:28:36.327277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.249 [2024-07-15 23:28:36.327299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.249 qpair failed and we were unable to recover it. 00:25:21.249 [2024-07-15 23:28:36.327526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.249 [2024-07-15 23:28:36.327553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.249 qpair failed and we were unable to recover it. 00:25:21.249 [2024-07-15 23:28:36.327798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.249 [2024-07-15 23:28:36.327857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.249 qpair failed and we were unable to recover it. 
00:25:21.249 [2024-07-15 23:28:36.328096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.249 [2024-07-15 23:28:36.328123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.249 qpair failed and we were unable to recover it. 00:25:21.249 [2024-07-15 23:28:36.328383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.249 [2024-07-15 23:28:36.328406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.249 qpair failed and we were unable to recover it. 00:25:21.249 [2024-07-15 23:28:36.328687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.249 [2024-07-15 23:28:36.328714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.249 qpair failed and we were unable to recover it. 00:25:21.249 [2024-07-15 23:28:36.328998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.249 [2024-07-15 23:28:36.329026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.249 qpair failed and we were unable to recover it. 00:25:21.250 [2024-07-15 23:28:36.329293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.250 [2024-07-15 23:28:36.329320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.250 qpair failed and we were unable to recover it. 00:25:21.250 [2024-07-15 23:28:36.329548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.250 [2024-07-15 23:28:36.329570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.250 qpair failed and we were unable to recover it. 00:25:21.250 [2024-07-15 23:28:36.329852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.250 [2024-07-15 23:28:36.329880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.250 qpair failed and we were unable to recover it. 00:25:21.250 [2024-07-15 23:28:36.330150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.250 [2024-07-15 23:28:36.330177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.250 qpair failed and we were unable to recover it. 00:25:21.250 [2024-07-15 23:28:36.330412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.250 [2024-07-15 23:28:36.330439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.250 qpair failed and we were unable to recover it. 00:25:21.250 [2024-07-15 23:28:36.330665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.250 [2024-07-15 23:28:36.330688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.250 qpair failed and we were unable to recover it. 
00:25:21.250 [2024-07-15 23:28:36.330990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.250 [2024-07-15 23:28:36.331023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.250 qpair failed and we were unable to recover it. 00:25:21.250 [2024-07-15 23:28:36.331314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.250 [2024-07-15 23:28:36.331364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.250 qpair failed and we were unable to recover it. 00:25:21.250 [2024-07-15 23:28:36.331650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.250 [2024-07-15 23:28:36.331677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.250 qpair failed and we were unable to recover it. 00:25:21.250 [2024-07-15 23:28:36.331870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.250 [2024-07-15 23:28:36.331894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.250 qpair failed and we were unable to recover it. 00:25:21.250 [2024-07-15 23:28:36.332168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.250 [2024-07-15 23:28:36.332195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.250 qpair failed and we were unable to recover it. 00:25:21.250 [2024-07-15 23:28:36.332423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.250 [2024-07-15 23:28:36.332472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.250 qpair failed and we were unable to recover it. 00:25:21.250 [2024-07-15 23:28:36.332679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.250 [2024-07-15 23:28:36.332706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.250 qpair failed and we were unable to recover it. 00:25:21.250 [2024-07-15 23:28:36.332988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.250 [2024-07-15 23:28:36.333027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.250 qpair failed and we were unable to recover it. 00:25:21.250 [2024-07-15 23:28:36.333335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.250 [2024-07-15 23:28:36.333362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.250 qpair failed and we were unable to recover it. 00:25:21.250 [2024-07-15 23:28:36.333641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.250 [2024-07-15 23:28:36.333689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.250 qpair failed and we were unable to recover it. 
00:25:21.250 [2024-07-15 23:28:36.333961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.250 [2024-07-15 23:28:36.333985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.250 qpair failed and we were unable to recover it. 00:25:21.250 [2024-07-15 23:28:36.334179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.250 [2024-07-15 23:28:36.334206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.250 qpair failed and we were unable to recover it. 00:25:21.250 [2024-07-15 23:28:36.334440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.250 [2024-07-15 23:28:36.334467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.250 qpair failed and we were unable to recover it. 00:25:21.250 [2024-07-15 23:28:36.334703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.250 [2024-07-15 23:28:36.334730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.250 qpair failed and we were unable to recover it. 00:25:21.250 [2024-07-15 23:28:36.335019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.250 [2024-07-15 23:28:36.335047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.250 qpair failed and we were unable to recover it. 00:25:21.250 [2024-07-15 23:28:36.335315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.250 [2024-07-15 23:28:36.335337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.250 qpair failed and we were unable to recover it. 00:25:21.250 [2024-07-15 23:28:36.335615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.250 [2024-07-15 23:28:36.335642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.250 qpair failed and we were unable to recover it. 00:25:21.250 [2024-07-15 23:28:36.335918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.250 [2024-07-15 23:28:36.335946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.250 qpair failed and we were unable to recover it. 00:25:21.250 [2024-07-15 23:28:36.336225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.250 [2024-07-15 23:28:36.336252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.250 qpair failed and we were unable to recover it. 00:25:21.250 [2024-07-15 23:28:36.336481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.250 [2024-07-15 23:28:36.336503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.250 qpair failed and we were unable to recover it. 
00:25:21.250 [2024-07-15 23:28:36.336760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.250 [2024-07-15 23:28:36.336788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.250 qpair failed and we were unable to recover it. 00:25:21.250 [2024-07-15 23:28:36.337068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.250 [2024-07-15 23:28:36.337095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.250 qpair failed and we were unable to recover it. 00:25:21.250 [2024-07-15 23:28:36.337322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.250 [2024-07-15 23:28:36.337349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.250 qpair failed and we were unable to recover it. 00:25:21.250 [2024-07-15 23:28:36.337628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.250 [2024-07-15 23:28:36.337650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.250 qpair failed and we were unable to recover it. 00:25:21.250 [2024-07-15 23:28:36.337925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.250 [2024-07-15 23:28:36.337953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.250 qpair failed and we were unable to recover it. 00:25:21.250 [2024-07-15 23:28:36.338234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.250 [2024-07-15 23:28:36.338282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.250 qpair failed and we were unable to recover it. 00:25:21.250 [2024-07-15 23:28:36.338549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.250 [2024-07-15 23:28:36.338576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.250 qpair failed and we were unable to recover it. 00:25:21.250 [2024-07-15 23:28:36.338803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.250 [2024-07-15 23:28:36.338830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.250 qpair failed and we were unable to recover it. 00:25:21.250 [2024-07-15 23:28:36.339064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.250 [2024-07-15 23:28:36.339092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.250 qpair failed and we were unable to recover it. 00:25:21.250 [2024-07-15 23:28:36.339339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.250 [2024-07-15 23:28:36.339389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.250 qpair failed and we were unable to recover it. 
00:25:21.250 [2024-07-15 23:28:36.339659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.250 [2024-07-15 23:28:36.339686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.250 qpair failed and we were unable to recover it. 00:25:21.250 [2024-07-15 23:28:36.339919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.250 [2024-07-15 23:28:36.339943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.250 qpair failed and we were unable to recover it. 00:25:21.250 [2024-07-15 23:28:36.340186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.250 [2024-07-15 23:28:36.340214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.250 qpair failed and we were unable to recover it. 00:25:21.250 [2024-07-15 23:28:36.340455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.250 [2024-07-15 23:28:36.340502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.250 qpair failed and we were unable to recover it. 00:25:21.250 [2024-07-15 23:28:36.340804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.250 [2024-07-15 23:28:36.340832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.250 qpair failed and we were unable to recover it. 00:25:21.251 [2024-07-15 23:28:36.341046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.251 [2024-07-15 23:28:36.341069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.251 qpair failed and we were unable to recover it. 00:25:21.251 [2024-07-15 23:28:36.341309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.251 [2024-07-15 23:28:36.341336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.251 qpair failed and we were unable to recover it. 00:25:21.251 [2024-07-15 23:28:36.341616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.251 [2024-07-15 23:28:36.341665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.251 qpair failed and we were unable to recover it. 00:25:21.251 [2024-07-15 23:28:36.341935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.251 [2024-07-15 23:28:36.341964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.251 qpair failed and we were unable to recover it. 00:25:21.251 [2024-07-15 23:28:36.342240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.251 [2024-07-15 23:28:36.342262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.251 qpair failed and we were unable to recover it. 
00:25:21.251 [2024-07-15 23:28:36.342556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.251 [2024-07-15 23:28:36.342583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.251 qpair failed and we were unable to recover it. 00:25:21.251 [2024-07-15 23:28:36.342827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.251 [2024-07-15 23:28:36.342855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.251 qpair failed and we were unable to recover it. 00:25:21.251 [2024-07-15 23:28:36.343125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.251 [2024-07-15 23:28:36.343152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.251 qpair failed and we were unable to recover it. 00:25:21.251 [2024-07-15 23:28:36.343423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.251 [2024-07-15 23:28:36.343445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.251 qpair failed and we were unable to recover it. 00:25:21.251 [2024-07-15 23:28:36.343726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.251 [2024-07-15 23:28:36.343762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.251 qpair failed and we were unable to recover it. 00:25:21.251 [2024-07-15 23:28:36.344056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.251 [2024-07-15 23:28:36.344084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.251 qpair failed and we were unable to recover it. 00:25:21.251 [2024-07-15 23:28:36.344367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.251 [2024-07-15 23:28:36.344394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.251 qpair failed and we were unable to recover it. 00:25:21.251 [2024-07-15 23:28:36.344662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.251 [2024-07-15 23:28:36.344684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.251 qpair failed and we were unable to recover it. 00:25:21.251 [2024-07-15 23:28:36.344933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.251 [2024-07-15 23:28:36.344961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.251 qpair failed and we were unable to recover it. 00:25:21.251 [2024-07-15 23:28:36.345254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.251 [2024-07-15 23:28:36.345310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.251 qpair failed and we were unable to recover it. 
00:25:21.251 [2024-07-15 23:28:36.345594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.251 [2024-07-15 23:28:36.345621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.251 qpair failed and we were unable to recover it. 00:25:21.251 [2024-07-15 23:28:36.345900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.251 [2024-07-15 23:28:36.345924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.251 qpair failed and we were unable to recover it. 00:25:21.251 [2024-07-15 23:28:36.346185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.251 [2024-07-15 23:28:36.346212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.251 qpair failed and we were unable to recover it. 00:25:21.251 [2024-07-15 23:28:36.346504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.251 [2024-07-15 23:28:36.346552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.251 qpair failed and we were unable to recover it. 00:25:21.251 [2024-07-15 23:28:36.346793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.251 [2024-07-15 23:28:36.346821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.251 qpair failed and we were unable to recover it. 00:25:21.251 [2024-07-15 23:28:36.347053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.251 [2024-07-15 23:28:36.347075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.251 qpair failed and we were unable to recover it. 00:25:21.251 [2024-07-15 23:28:36.347306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.251 [2024-07-15 23:28:36.347332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.251 qpair failed and we were unable to recover it. 00:25:21.251 [2024-07-15 23:28:36.347617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.251 [2024-07-15 23:28:36.347668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.251 qpair failed and we were unable to recover it. 00:25:21.251 [2024-07-15 23:28:36.347934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.251 [2024-07-15 23:28:36.347962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.251 qpair failed and we were unable to recover it. 00:25:21.251 [2024-07-15 23:28:36.348243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.251 [2024-07-15 23:28:36.348265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.251 qpair failed and we were unable to recover it. 
00:25:21.251 [2024-07-15 23:28:36.348500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.251 [2024-07-15 23:28:36.348527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.251 qpair failed and we were unable to recover it. 00:25:21.251 [2024-07-15 23:28:36.348772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.251 [2024-07-15 23:28:36.348800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.251 qpair failed and we were unable to recover it. 00:25:21.251 [2024-07-15 23:28:36.349051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.251 [2024-07-15 23:28:36.349079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.251 qpair failed and we were unable to recover it. 00:25:21.251 [2024-07-15 23:28:36.349350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.251 [2024-07-15 23:28:36.349372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.251 qpair failed and we were unable to recover it. 00:25:21.251 [2024-07-15 23:28:36.349648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.251 [2024-07-15 23:28:36.349676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.251 qpair failed and we were unable to recover it. 00:25:21.251 [2024-07-15 23:28:36.349959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.251 [2024-07-15 23:28:36.349986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.251 qpair failed and we were unable to recover it. 00:25:21.251 [2024-07-15 23:28:36.350225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.251 [2024-07-15 23:28:36.350252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.251 qpair failed and we were unable to recover it. 00:25:21.251 [2024-07-15 23:28:36.350523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.251 [2024-07-15 23:28:36.350545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.251 qpair failed and we were unable to recover it. 00:25:21.251 [2024-07-15 23:28:36.350825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.251 [2024-07-15 23:28:36.350853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.251 qpair failed and we were unable to recover it. 00:25:21.251 [2024-07-15 23:28:36.351129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.251 [2024-07-15 23:28:36.351157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.251 qpair failed and we were unable to recover it. 
00:25:21.251 [2024-07-15 23:28:36.351431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.251 [2024-07-15 23:28:36.351458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.252 qpair failed and we were unable to recover it. 00:25:21.252 [2024-07-15 23:28:36.351687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.252 [2024-07-15 23:28:36.351710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.252 qpair failed and we were unable to recover it. 00:25:21.252 [2024-07-15 23:28:36.351961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.252 [2024-07-15 23:28:36.351989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.252 qpair failed and we were unable to recover it. 00:25:21.252 [2024-07-15 23:28:36.352271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.252 [2024-07-15 23:28:36.352318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.252 qpair failed and we were unable to recover it. 00:25:21.252 [2024-07-15 23:28:36.352553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.252 [2024-07-15 23:28:36.352580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.252 qpair failed and we were unable to recover it. 00:25:21.252 [2024-07-15 23:28:36.352810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.252 [2024-07-15 23:28:36.352833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.252 qpair failed and we were unable to recover it. 00:25:21.252 [2024-07-15 23:28:36.353118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.252 [2024-07-15 23:28:36.353145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.252 qpair failed and we were unable to recover it. 00:25:21.252 [2024-07-15 23:28:36.353436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.252 [2024-07-15 23:28:36.353486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.252 qpair failed and we were unable to recover it. 00:25:21.252 [2024-07-15 23:28:36.353754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.252 [2024-07-15 23:28:36.353782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.252 qpair failed and we were unable to recover it. 00:25:21.252 [2024-07-15 23:28:36.354069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.252 [2024-07-15 23:28:36.354092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.252 qpair failed and we were unable to recover it. 
00:25:21.252 [2024-07-15 23:28:36.354334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.252 [2024-07-15 23:28:36.354361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.252 qpair failed and we were unable to recover it. 00:25:21.252 [2024-07-15 23:28:36.354612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.252 [2024-07-15 23:28:36.354659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.252 qpair failed and we were unable to recover it. 00:25:21.252 [2024-07-15 23:28:36.354933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.252 [2024-07-15 23:28:36.354962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.252 qpair failed and we were unable to recover it. 00:25:21.252 [2024-07-15 23:28:36.355237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.252 [2024-07-15 23:28:36.355259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.252 qpair failed and we were unable to recover it. 00:25:21.252 [2024-07-15 23:28:36.355506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.252 [2024-07-15 23:28:36.355533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.252 qpair failed and we were unable to recover it. 00:25:21.252 [2024-07-15 23:28:36.355808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.252 [2024-07-15 23:28:36.355836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.252 qpair failed and we were unable to recover it. 00:25:21.252 [2024-07-15 23:28:36.356100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.252 [2024-07-15 23:28:36.356127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.252 qpair failed and we were unable to recover it. 00:25:21.252 [2024-07-15 23:28:36.356399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.252 [2024-07-15 23:28:36.356422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.252 qpair failed and we were unable to recover it. 00:25:21.252 [2024-07-15 23:28:36.356699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.252 [2024-07-15 23:28:36.356727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.252 qpair failed and we were unable to recover it. 00:25:21.252 [2024-07-15 23:28:36.356971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.252 [2024-07-15 23:28:36.356998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.252 qpair failed and we were unable to recover it. 
00:25:21.252 [2024-07-15 23:28:36.357229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.252 [2024-07-15 23:28:36.357256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.252 qpair failed and we were unable to recover it. 00:25:21.252 [2024-07-15 23:28:36.357487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.252 [2024-07-15 23:28:36.357510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.252 qpair failed and we were unable to recover it. 00:25:21.252 [2024-07-15 23:28:36.357806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.252 [2024-07-15 23:28:36.357835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.252 qpair failed and we were unable to recover it. 00:25:21.252 [2024-07-15 23:28:36.358037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.252 [2024-07-15 23:28:36.358064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.252 qpair failed and we were unable to recover it. 00:25:21.252 [2024-07-15 23:28:36.358345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.252 [2024-07-15 23:28:36.358373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.252 qpair failed and we were unable to recover it. 00:25:21.252 [2024-07-15 23:28:36.358666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.252 [2024-07-15 23:28:36.358694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.252 qpair failed and we were unable to recover it. 00:25:21.252 [2024-07-15 23:28:36.358992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.252 [2024-07-15 23:28:36.359020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.252 qpair failed and we were unable to recover it. 00:25:21.252 [2024-07-15 23:28:36.359301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.252 [2024-07-15 23:28:36.359351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.252 qpair failed and we were unable to recover it. 00:25:21.252 [2024-07-15 23:28:36.359623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.252 [2024-07-15 23:28:36.359650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.252 qpair failed and we were unable to recover it. 00:25:21.252 [2024-07-15 23:28:36.359932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.252 [2024-07-15 23:28:36.359955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.252 qpair failed and we were unable to recover it. 
00:25:21.252 [2024-07-15 23:28:36.360192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.252 [2024-07-15 23:28:36.360219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.252 qpair failed and we were unable to recover it. 00:25:21.252 [2024-07-15 23:28:36.360525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.252 [2024-07-15 23:28:36.360583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.252 qpair failed and we were unable to recover it. 00:25:21.252 [2024-07-15 23:28:36.360855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.252 [2024-07-15 23:28:36.360884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.252 qpair failed and we were unable to recover it. 00:25:21.252 [2024-07-15 23:28:36.361164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.252 [2024-07-15 23:28:36.361187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.252 qpair failed and we were unable to recover it. 00:25:21.252 [2024-07-15 23:28:36.361480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.252 [2024-07-15 23:28:36.361507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.252 qpair failed and we were unable to recover it. 00:25:21.252 [2024-07-15 23:28:36.361732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.252 [2024-07-15 23:28:36.361776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.252 qpair failed and we were unable to recover it. 00:25:21.252 [2024-07-15 23:28:36.362046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.252 [2024-07-15 23:28:36.362074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.252 qpair failed and we were unable to recover it. 00:25:21.252 [2024-07-15 23:28:36.362241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.252 [2024-07-15 23:28:36.362263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.252 qpair failed and we were unable to recover it. 00:25:21.252 [2024-07-15 23:28:36.362540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.252 [2024-07-15 23:28:36.362566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.252 qpair failed and we were unable to recover it. 00:25:21.252 [2024-07-15 23:28:36.362846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.252 [2024-07-15 23:28:36.362875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.252 qpair failed and we were unable to recover it. 
00:25:21.252 [2024-07-15 23:28:36.363087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.252 [2024-07-15 23:28:36.363114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.252 qpair failed and we were unable to recover it. 00:25:21.252 [2024-07-15 23:28:36.363366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.252 [2024-07-15 23:28:36.363389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.253 qpair failed and we were unable to recover it. 00:25:21.253 [2024-07-15 23:28:36.363646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.253 [2024-07-15 23:28:36.363673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.253 qpair failed and we were unable to recover it. 00:25:21.253 [2024-07-15 23:28:36.363960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.253 [2024-07-15 23:28:36.363988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.253 qpair failed and we were unable to recover it. 00:25:21.253 [2024-07-15 23:28:36.364191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.253 [2024-07-15 23:28:36.364218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.253 qpair failed and we were unable to recover it. 00:25:21.253 [2024-07-15 23:28:36.364450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.253 [2024-07-15 23:28:36.364473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.253 qpair failed and we were unable to recover it. 00:25:21.253 [2024-07-15 23:28:36.364754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.253 [2024-07-15 23:28:36.364781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.253 qpair failed and we were unable to recover it. 00:25:21.253 [2024-07-15 23:28:36.365060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.253 [2024-07-15 23:28:36.365087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.253 qpair failed and we were unable to recover it. 00:25:21.253 [2024-07-15 23:28:36.365329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.253 [2024-07-15 23:28:36.365356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.253 qpair failed and we were unable to recover it. 00:25:21.253 [2024-07-15 23:28:36.365630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.253 [2024-07-15 23:28:36.365653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.253 qpair failed and we were unable to recover it. 
00:25:21.253 [2024-07-15 23:28:36.365895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.253 [2024-07-15 23:28:36.365923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.253 qpair failed and we were unable to recover it. 00:25:21.253 [2024-07-15 23:28:36.366217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.253 [2024-07-15 23:28:36.366264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.253 qpair failed and we were unable to recover it. 00:25:21.253 [2024-07-15 23:28:36.366538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.253 [2024-07-15 23:28:36.366569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.253 qpair failed and we were unable to recover it. 00:25:21.253 [2024-07-15 23:28:36.366787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.253 [2024-07-15 23:28:36.366810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.253 qpair failed and we were unable to recover it. 00:25:21.253 [2024-07-15 23:28:36.367027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.253 [2024-07-15 23:28:36.367054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.253 qpair failed and we were unable to recover it. 00:25:21.253 [2024-07-15 23:28:36.367299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.253 [2024-07-15 23:28:36.367350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.253 qpair failed and we were unable to recover it. 00:25:21.253 [2024-07-15 23:28:36.367570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.253 [2024-07-15 23:28:36.367597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.253 qpair failed and we were unable to recover it. 00:25:21.253 [2024-07-15 23:28:36.367815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.253 [2024-07-15 23:28:36.367838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.253 qpair failed and we were unable to recover it. 00:25:21.253 [2024-07-15 23:28:36.368043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.253 [2024-07-15 23:28:36.368070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.253 qpair failed and we were unable to recover it. 00:25:21.253 [2024-07-15 23:28:36.368295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.253 [2024-07-15 23:28:36.368345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.253 qpair failed and we were unable to recover it. 
00:25:21.253 [2024-07-15 23:28:36.368629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.253 [2024-07-15 23:28:36.368656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.253 qpair failed and we were unable to recover it. 00:25:21.253 [2024-07-15 23:28:36.368938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.253 [2024-07-15 23:28:36.368962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.253 qpair failed and we were unable to recover it. 00:25:21.253 [2024-07-15 23:28:36.369250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.253 [2024-07-15 23:28:36.369277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.253 qpair failed and we were unable to recover it. 00:25:21.253 [2024-07-15 23:28:36.369520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.253 [2024-07-15 23:28:36.369567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.253 qpair failed and we were unable to recover it. 00:25:21.253 [2024-07-15 23:28:36.369837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.253 [2024-07-15 23:28:36.369865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.253 qpair failed and we were unable to recover it. 00:25:21.253 [2024-07-15 23:28:36.370102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.253 [2024-07-15 23:28:36.370125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.253 qpair failed and we were unable to recover it. 00:25:21.253 [2024-07-15 23:28:36.370403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.253 [2024-07-15 23:28:36.370430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.253 qpair failed and we were unable to recover it. 00:25:21.253 [2024-07-15 23:28:36.370715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.253 [2024-07-15 23:28:36.370773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.253 qpair failed and we were unable to recover it. 00:25:21.253 [2024-07-15 23:28:36.371046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.253 [2024-07-15 23:28:36.371073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.253 qpair failed and we were unable to recover it. 00:25:21.253 [2024-07-15 23:28:36.371344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.253 [2024-07-15 23:28:36.371366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.253 qpair failed and we were unable to recover it. 
00:25:21.253 [2024-07-15 23:28:36.371613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.253 [2024-07-15 23:28:36.371641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.253 qpair failed and we were unable to recover it. 00:25:21.253 [2024-07-15 23:28:36.371874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.253 [2024-07-15 23:28:36.371902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.253 qpair failed and we were unable to recover it. 00:25:21.253 [2024-07-15 23:28:36.372167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.253 [2024-07-15 23:28:36.372194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.253 qpair failed and we were unable to recover it. 00:25:21.253 [2024-07-15 23:28:36.372467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.253 [2024-07-15 23:28:36.372489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.253 qpair failed and we were unable to recover it. 00:25:21.253 [2024-07-15 23:28:36.372763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.253 [2024-07-15 23:28:36.372792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.253 qpair failed and we were unable to recover it. 00:25:21.253 [2024-07-15 23:28:36.373069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.253 [2024-07-15 23:28:36.373096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.253 qpair failed and we were unable to recover it. 00:25:21.253 [2024-07-15 23:28:36.373366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.253 [2024-07-15 23:28:36.373393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.253 qpair failed and we were unable to recover it. 00:25:21.253 [2024-07-15 23:28:36.373669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.253 [2024-07-15 23:28:36.373691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.253 qpair failed and we were unable to recover it. 00:25:21.253 [2024-07-15 23:28:36.374024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.253 [2024-07-15 23:28:36.374054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.253 qpair failed and we were unable to recover it. 00:25:21.253 [2024-07-15 23:28:36.374264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.253 [2024-07-15 23:28:36.374313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.253 qpair failed and we were unable to recover it. 
00:25:21.253 [2024-07-15 23:28:36.374590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.253 [2024-07-15 23:28:36.374617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.253 qpair failed and we were unable to recover it. 00:25:21.253 [2024-07-15 23:28:36.374847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.253 [2024-07-15 23:28:36.374870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.253 qpair failed and we were unable to recover it. 00:25:21.253 [2024-07-15 23:28:36.375121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.253 [2024-07-15 23:28:36.375149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.253 qpair failed and we were unable to recover it. 00:25:21.254 [2024-07-15 23:28:36.375386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.254 [2024-07-15 23:28:36.375436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.254 qpair failed and we were unable to recover it. 00:25:21.254 [2024-07-15 23:28:36.375712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.254 [2024-07-15 23:28:36.375752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.254 qpair failed and we were unable to recover it. 00:25:21.254 [2024-07-15 23:28:36.376035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.254 [2024-07-15 23:28:36.376057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.254 qpair failed and we were unable to recover it. 00:25:21.254 [2024-07-15 23:28:36.376345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.254 [2024-07-15 23:28:36.376372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.254 qpair failed and we were unable to recover it. 00:25:21.254 [2024-07-15 23:28:36.376586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.254 [2024-07-15 23:28:36.376634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.254 qpair failed and we were unable to recover it. 00:25:21.254 [2024-07-15 23:28:36.376863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.254 [2024-07-15 23:28:36.376891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.254 qpair failed and we were unable to recover it. 00:25:21.254 [2024-07-15 23:28:36.377151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.254 [2024-07-15 23:28:36.377173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.254 qpair failed and we were unable to recover it. 
00:25:21.254 [2024-07-15 23:28:36.377427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.254 [2024-07-15 23:28:36.377455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.254 qpair failed and we were unable to recover it. 00:25:21.254 [2024-07-15 23:28:36.377724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.254 [2024-07-15 23:28:36.377759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.254 qpair failed and we were unable to recover it. 00:25:21.254 [2024-07-15 23:28:36.378030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.254 [2024-07-15 23:28:36.378057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.254 qpair failed and we were unable to recover it. 00:25:21.254 [2024-07-15 23:28:36.378297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.254 [2024-07-15 23:28:36.378319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.254 qpair failed and we were unable to recover it. 00:25:21.254 [2024-07-15 23:28:36.378568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.254 [2024-07-15 23:28:36.378595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.254 qpair failed and we were unable to recover it. 00:25:21.254 [2024-07-15 23:28:36.378882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.254 [2024-07-15 23:28:36.378910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.254 qpair failed and we were unable to recover it. 00:25:21.254 [2024-07-15 23:28:36.379183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.254 [2024-07-15 23:28:36.379210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.254 qpair failed and we were unable to recover it. 00:25:21.254 [2024-07-15 23:28:36.379487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.254 [2024-07-15 23:28:36.379509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.254 qpair failed and we were unable to recover it. 00:25:21.254 [2024-07-15 23:28:36.379714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.254 [2024-07-15 23:28:36.379747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.254 qpair failed and we were unable to recover it. 00:25:21.254 [2024-07-15 23:28:36.379988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.254 [2024-07-15 23:28:36.380015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.254 qpair failed and we were unable to recover it. 
00:25:21.254 [2024-07-15 23:28:36.380296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.254 [2024-07-15 23:28:36.380323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.254 qpair failed and we were unable to recover it. 00:25:21.254 [2024-07-15 23:28:36.380600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.254 [2024-07-15 23:28:36.380622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.254 qpair failed and we were unable to recover it. 00:25:21.254 [2024-07-15 23:28:36.380868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.254 [2024-07-15 23:28:36.380896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.254 qpair failed and we were unable to recover it. 00:25:21.254 [2024-07-15 23:28:36.381137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.254 [2024-07-15 23:28:36.381183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.254 qpair failed and we were unable to recover it. 00:25:21.254 [2024-07-15 23:28:36.381460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.254 [2024-07-15 23:28:36.381487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.254 qpair failed and we were unable to recover it. 00:25:21.254 [2024-07-15 23:28:36.381727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.254 [2024-07-15 23:28:36.381770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.254 qpair failed and we were unable to recover it. 00:25:21.254 [2024-07-15 23:28:36.382006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.254 [2024-07-15 23:28:36.382034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.254 qpair failed and we were unable to recover it. 00:25:21.254 [2024-07-15 23:28:36.382282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.254 [2024-07-15 23:28:36.382328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.254 qpair failed and we were unable to recover it. 00:25:21.254 [2024-07-15 23:28:36.382559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.254 [2024-07-15 23:28:36.382586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.254 qpair failed and we were unable to recover it. 00:25:21.254 [2024-07-15 23:28:36.382860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.254 [2024-07-15 23:28:36.382883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.254 qpair failed and we were unable to recover it. 
00:25:21.254 [2024-07-15 23:28:36.383124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.254 [2024-07-15 23:28:36.383151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.254 qpair failed and we were unable to recover it. 00:25:21.254 [2024-07-15 23:28:36.383430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.254 [2024-07-15 23:28:36.383478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.254 qpair failed and we were unable to recover it. 00:25:21.254 [2024-07-15 23:28:36.383714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.254 [2024-07-15 23:28:36.383748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.254 qpair failed and we were unable to recover it. 00:25:21.254 [2024-07-15 23:28:36.383942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.254 [2024-07-15 23:28:36.383965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.254 qpair failed and we were unable to recover it. 00:25:21.254 [2024-07-15 23:28:36.384250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.254 [2024-07-15 23:28:36.384277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.254 qpair failed and we were unable to recover it. 00:25:21.254 [2024-07-15 23:28:36.384562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.254 [2024-07-15 23:28:36.384609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.254 qpair failed and we were unable to recover it. 00:25:21.254 [2024-07-15 23:28:36.384888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.254 [2024-07-15 23:28:36.384916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.254 qpair failed and we were unable to recover it. 00:25:21.254 [2024-07-15 23:28:36.385156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.254 [2024-07-15 23:28:36.385178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.254 qpair failed and we were unable to recover it. 00:25:21.254 [2024-07-15 23:28:36.385413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.254 [2024-07-15 23:28:36.385440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.254 qpair failed and we were unable to recover it. 00:25:21.254 [2024-07-15 23:28:36.385710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.254 [2024-07-15 23:28:36.385744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.254 qpair failed and we were unable to recover it. 
00:25:21.254 [2024-07-15 23:28:36.386014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.254 [2024-07-15 23:28:36.386046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.254 qpair failed and we were unable to recover it. 00:25:21.254 [2024-07-15 23:28:36.386322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.254 [2024-07-15 23:28:36.386345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.254 qpair failed and we were unable to recover it. 00:25:21.254 [2024-07-15 23:28:36.386604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.254 [2024-07-15 23:28:36.386631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.254 qpair failed and we were unable to recover it. 00:25:21.254 [2024-07-15 23:28:36.386846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.254 [2024-07-15 23:28:36.386874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.254 qpair failed and we were unable to recover it. 00:25:21.254 [2024-07-15 23:28:36.387141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.255 [2024-07-15 23:28:36.387167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.255 qpair failed and we were unable to recover it. 00:25:21.255 [2024-07-15 23:28:36.387436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.255 [2024-07-15 23:28:36.387457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.255 qpair failed and we were unable to recover it. 00:25:21.255 [2024-07-15 23:28:36.387692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.255 [2024-07-15 23:28:36.387720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.255 qpair failed and we were unable to recover it. 00:25:21.255 [2024-07-15 23:28:36.387953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.255 [2024-07-15 23:28:36.387980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.255 qpair failed and we were unable to recover it. 00:25:21.255 [2024-07-15 23:28:36.388251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.255 [2024-07-15 23:28:36.388278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.255 qpair failed and we were unable to recover it. 00:25:21.255 [2024-07-15 23:28:36.388559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.255 [2024-07-15 23:28:36.388581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.255 qpair failed and we were unable to recover it. 
00:25:21.255 [2024-07-15 23:28:36.388751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.255 [2024-07-15 23:28:36.388778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.255 qpair failed and we were unable to recover it. 00:25:21.255 [2024-07-15 23:28:36.389004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.255 [2024-07-15 23:28:36.389031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.255 qpair failed and we were unable to recover it. 00:25:21.255 [2024-07-15 23:28:36.389311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.255 [2024-07-15 23:28:36.389338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.255 qpair failed and we were unable to recover it. 00:25:21.255 [2024-07-15 23:28:36.389579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.255 [2024-07-15 23:28:36.389601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.255 qpair failed and we were unable to recover it. 00:25:21.255 [2024-07-15 23:28:36.389887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.255 [2024-07-15 23:28:36.389915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.255 qpair failed and we were unable to recover it. 00:25:21.255 [2024-07-15 23:28:36.390215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.255 [2024-07-15 23:28:36.390263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.255 qpair failed and we were unable to recover it. 00:25:21.255 [2024-07-15 23:28:36.390505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.255 [2024-07-15 23:28:36.390532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.255 qpair failed and we were unable to recover it. 00:25:21.255 [2024-07-15 23:28:36.390779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.255 [2024-07-15 23:28:36.390802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.255 qpair failed and we were unable to recover it. 00:25:21.255 [2024-07-15 23:28:36.391087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.255 [2024-07-15 23:28:36.391114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.255 qpair failed and we were unable to recover it. 00:25:21.255 [2024-07-15 23:28:36.391406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.255 [2024-07-15 23:28:36.391456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.255 qpair failed and we were unable to recover it. 
00:25:21.255 [2024-07-15 23:28:36.391724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.255 [2024-07-15 23:28:36.391758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.255 qpair failed and we were unable to recover it. 00:25:21.255 [2024-07-15 23:28:36.392045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.255 [2024-07-15 23:28:36.392067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.255 qpair failed and we were unable to recover it. 00:25:21.255 [2024-07-15 23:28:36.392346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.255 [2024-07-15 23:28:36.392374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.255 qpair failed and we were unable to recover it. 00:25:21.255 [2024-07-15 23:28:36.392595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.255 [2024-07-15 23:28:36.392645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.255 qpair failed and we were unable to recover it. 00:25:21.255 [2024-07-15 23:28:36.392925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.255 [2024-07-15 23:28:36.392953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.255 qpair failed and we were unable to recover it. 00:25:21.255 [2024-07-15 23:28:36.393189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.255 [2024-07-15 23:28:36.393211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.255 qpair failed and we were unable to recover it. 00:25:21.255 [2024-07-15 23:28:36.393435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.255 [2024-07-15 23:28:36.393463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.255 qpair failed and we were unable to recover it. 00:25:21.255 [2024-07-15 23:28:36.393730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.255 [2024-07-15 23:28:36.393771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.255 qpair failed and we were unable to recover it. 00:25:21.255 [2024-07-15 23:28:36.394013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.255 [2024-07-15 23:28:36.394040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.255 qpair failed and we were unable to recover it. 00:25:21.255 [2024-07-15 23:28:36.394303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.255 [2024-07-15 23:28:36.394326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.255 qpair failed and we were unable to recover it. 
00:25:21.255 [2024-07-15 23:28:36.394604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.255 [2024-07-15 23:28:36.394631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.255 qpair failed and we were unable to recover it. 00:25:21.255 [2024-07-15 23:28:36.394862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.255 [2024-07-15 23:28:36.394890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.255 qpair failed and we were unable to recover it. 00:25:21.255 [2024-07-15 23:28:36.395159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.255 [2024-07-15 23:28:36.395186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.255 qpair failed and we were unable to recover it. 00:25:21.255 [2024-07-15 23:28:36.395451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.255 [2024-07-15 23:28:36.395473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.255 qpair failed and we were unable to recover it. 00:25:21.255 [2024-07-15 23:28:36.395751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.255 [2024-07-15 23:28:36.395779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.255 qpair failed and we were unable to recover it. 00:25:21.255 [2024-07-15 23:28:36.396017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.255 [2024-07-15 23:28:36.396043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.255 qpair failed and we were unable to recover it. 00:25:21.255 [2024-07-15 23:28:36.396320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.255 [2024-07-15 23:28:36.396347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.255 qpair failed and we were unable to recover it. 00:25:21.255 [2024-07-15 23:28:36.396572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.255 [2024-07-15 23:28:36.396595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.255 qpair failed and we were unable to recover it. 00:25:21.255 [2024-07-15 23:28:36.396874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.255 [2024-07-15 23:28:36.396901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.255 qpair failed and we were unable to recover it. 00:25:21.255 [2024-07-15 23:28:36.397139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.255 [2024-07-15 23:28:36.397186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.255 qpair failed and we were unable to recover it. 
00:25:21.255 [2024-07-15 23:28:36.397415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.255 [2024-07-15 23:28:36.397443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.255 qpair failed and we were unable to recover it. 00:25:21.256 [2024-07-15 23:28:36.397679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.256 [2024-07-15 23:28:36.397701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.256 qpair failed and we were unable to recover it. 00:25:21.256 [2024-07-15 23:28:36.397984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.256 [2024-07-15 23:28:36.398012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.256 qpair failed and we were unable to recover it. 00:25:21.256 [2024-07-15 23:28:36.398311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.256 [2024-07-15 23:28:36.398362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.256 qpair failed and we were unable to recover it. 00:25:21.256 [2024-07-15 23:28:36.398640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.256 [2024-07-15 23:28:36.398667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.256 qpair failed and we were unable to recover it. 00:25:21.256 [2024-07-15 23:28:36.398954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.256 [2024-07-15 23:28:36.398977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.256 qpair failed and we were unable to recover it. 00:25:21.256 [2024-07-15 23:28:36.399268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.256 [2024-07-15 23:28:36.399295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.256 qpair failed and we were unable to recover it. 00:25:21.256 [2024-07-15 23:28:36.399491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.256 [2024-07-15 23:28:36.399541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.256 qpair failed and we were unable to recover it. 00:25:21.256 [2024-07-15 23:28:36.399764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.256 [2024-07-15 23:28:36.399791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.256 qpair failed and we were unable to recover it. 00:25:21.256 [2024-07-15 23:28:36.400069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.256 [2024-07-15 23:28:36.400091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.256 qpair failed and we were unable to recover it. 
00:25:21.256 [2024-07-15 23:28:36.400375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.256 [2024-07-15 23:28:36.400402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.256 qpair failed and we were unable to recover it. 00:25:21.256 [2024-07-15 23:28:36.400671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.256 [2024-07-15 23:28:36.400719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.256 qpair failed and we were unable to recover it. 00:25:21.256 [2024-07-15 23:28:36.400982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.256 [2024-07-15 23:28:36.401009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.256 qpair failed and we were unable to recover it. 00:25:21.256 [2024-07-15 23:28:36.401277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.256 [2024-07-15 23:28:36.401300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.256 qpair failed and we were unable to recover it. 00:25:21.256 [2024-07-15 23:28:36.401491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.256 [2024-07-15 23:28:36.401523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.256 qpair failed and we were unable to recover it. 00:25:21.256 [2024-07-15 23:28:36.401757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.256 [2024-07-15 23:28:36.401784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.256 qpair failed and we were unable to recover it. 00:25:21.256 [2024-07-15 23:28:36.402005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.256 [2024-07-15 23:28:36.402032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.256 qpair failed and we were unable to recover it. 00:25:21.256 [2024-07-15 23:28:36.402313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.256 [2024-07-15 23:28:36.402336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.256 qpair failed and we were unable to recover it. 00:25:21.256 [2024-07-15 23:28:36.402590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.256 [2024-07-15 23:28:36.402617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.256 qpair failed and we were unable to recover it. 00:25:21.256 [2024-07-15 23:28:36.402903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.256 [2024-07-15 23:28:36.402931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.256 qpair failed and we were unable to recover it. 
00:25:21.256 [2024-07-15 23:28:36.403175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.256 [2024-07-15 23:28:36.403202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.256 qpair failed and we were unable to recover it. 00:25:21.256 [2024-07-15 23:28:36.403408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.256 [2024-07-15 23:28:36.403430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.256 qpair failed and we were unable to recover it. 00:25:21.256 [2024-07-15 23:28:36.403655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.256 [2024-07-15 23:28:36.403682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.256 qpair failed and we were unable to recover it. 00:25:21.256 [2024-07-15 23:28:36.403964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.256 [2024-07-15 23:28:36.403991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.256 qpair failed and we were unable to recover it. 00:25:21.256 [2024-07-15 23:28:36.404281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.256 [2024-07-15 23:28:36.404308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.256 qpair failed and we were unable to recover it. 00:25:21.256 [2024-07-15 23:28:36.404500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.256 [2024-07-15 23:28:36.404522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.256 qpair failed and we were unable to recover it. 00:25:21.256 [2024-07-15 23:28:36.404758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.256 [2024-07-15 23:28:36.404786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.256 qpair failed and we were unable to recover it. 00:25:21.256 [2024-07-15 23:28:36.405016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.256 [2024-07-15 23:28:36.405043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.256 qpair failed and we were unable to recover it. 00:25:21.256 [2024-07-15 23:28:36.405317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.256 [2024-07-15 23:28:36.405344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.256 qpair failed and we were unable to recover it. 00:25:21.256 [2024-07-15 23:28:36.405578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.256 [2024-07-15 23:28:36.405600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.256 qpair failed and we were unable to recover it. 
00:25:21.256 [2024-07-15 23:28:36.405841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.256 [2024-07-15 23:28:36.405868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.256 qpair failed and we were unable to recover it. 00:25:21.256 [2024-07-15 23:28:36.406165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.256 [2024-07-15 23:28:36.406220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.256 qpair failed and we were unable to recover it. 00:25:21.256 [2024-07-15 23:28:36.406444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.256 [2024-07-15 23:28:36.406471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.256 qpair failed and we were unable to recover it. 00:25:21.256 [2024-07-15 23:28:36.406702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.256 [2024-07-15 23:28:36.406746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.256 qpair failed and we were unable to recover it. 00:25:21.256 [2024-07-15 23:28:36.407041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.256 [2024-07-15 23:28:36.407068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.256 qpair failed and we were unable to recover it. 00:25:21.256 [2024-07-15 23:28:36.407350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.256 [2024-07-15 23:28:36.407400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.256 qpair failed and we were unable to recover it. 00:25:21.256 [2024-07-15 23:28:36.407689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.256 [2024-07-15 23:28:36.407716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.256 qpair failed and we were unable to recover it. 00:25:21.256 [2024-07-15 23:28:36.408005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.256 [2024-07-15 23:28:36.408028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.256 qpair failed and we were unable to recover it. 00:25:21.256 [2024-07-15 23:28:36.408268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.256 [2024-07-15 23:28:36.408295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.256 qpair failed and we were unable to recover it. 00:25:21.256 [2024-07-15 23:28:36.408584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.256 [2024-07-15 23:28:36.408631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.256 qpair failed and we were unable to recover it. 
00:25:21.256 [2024-07-15 23:28:36.408906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.256 [2024-07-15 23:28:36.408934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.256 qpair failed and we were unable to recover it. 00:25:21.256 [2024-07-15 23:28:36.409177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.257 [2024-07-15 23:28:36.409198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.257 qpair failed and we were unable to recover it. 00:25:21.257 [2024-07-15 23:28:36.409485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.257 [2024-07-15 23:28:36.409512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.257 qpair failed and we were unable to recover it. 00:25:21.257 [2024-07-15 23:28:36.409746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.257 [2024-07-15 23:28:36.409774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.257 qpair failed and we were unable to recover it. 00:25:21.257 [2024-07-15 23:28:36.409967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.257 [2024-07-15 23:28:36.409994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.257 qpair failed and we were unable to recover it. 00:25:21.257 [2024-07-15 23:28:36.410259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.257 [2024-07-15 23:28:36.410281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.257 qpair failed and we were unable to recover it. 00:25:21.257 [2024-07-15 23:28:36.410573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.257 [2024-07-15 23:28:36.410600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.257 qpair failed and we were unable to recover it. 00:25:21.257 [2024-07-15 23:28:36.410871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.257 [2024-07-15 23:28:36.410899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.257 qpair failed and we were unable to recover it. 00:25:21.257 [2024-07-15 23:28:36.411141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.257 [2024-07-15 23:28:36.411167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.257 qpair failed and we were unable to recover it. 00:25:21.257 [2024-07-15 23:28:36.411407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.257 [2024-07-15 23:28:36.411429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.257 qpair failed and we were unable to recover it. 
00:25:21.257 [2024-07-15 23:28:36.411715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.257 [2024-07-15 23:28:36.411749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.257 qpair failed and we were unable to recover it. 00:25:21.257 [2024-07-15 23:28:36.411998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.257 [2024-07-15 23:28:36.412025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.257 qpair failed and we were unable to recover it. 00:25:21.257 [2024-07-15 23:28:36.412260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.257 [2024-07-15 23:28:36.412287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.257 qpair failed and we were unable to recover it. 00:25:21.257 [2024-07-15 23:28:36.412569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.257 [2024-07-15 23:28:36.412592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.257 qpair failed and we were unable to recover it. 00:25:21.257 [2024-07-15 23:28:36.412835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.257 [2024-07-15 23:28:36.412863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.257 qpair failed and we were unable to recover it. 00:25:21.257 [2024-07-15 23:28:36.413144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.257 [2024-07-15 23:28:36.413194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.257 qpair failed and we were unable to recover it. 00:25:21.257 [2024-07-15 23:28:36.413463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.257 [2024-07-15 23:28:36.413490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.257 qpair failed and we were unable to recover it. 00:25:21.257 [2024-07-15 23:28:36.413715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.257 [2024-07-15 23:28:36.413742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.257 qpair failed and we were unable to recover it. 00:25:21.257 [2024-07-15 23:28:36.413991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.257 [2024-07-15 23:28:36.414018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.257 qpair failed and we were unable to recover it. 00:25:21.257 [2024-07-15 23:28:36.414261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.257 [2024-07-15 23:28:36.414311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.257 qpair failed and we were unable to recover it. 
00:25:21.257 [2024-07-15 23:28:36.414593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.257 [2024-07-15 23:28:36.414621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.257 qpair failed and we were unable to recover it. 00:25:21.257 [2024-07-15 23:28:36.414897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.257 [2024-07-15 23:28:36.414920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.257 qpair failed and we were unable to recover it. 00:25:21.257 [2024-07-15 23:28:36.415165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.257 [2024-07-15 23:28:36.415192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.257 qpair failed and we were unable to recover it. 00:25:21.257 [2024-07-15 23:28:36.415446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.257 [2024-07-15 23:28:36.415494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.257 qpair failed and we were unable to recover it. 00:25:21.257 [2024-07-15 23:28:36.415722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.257 [2024-07-15 23:28:36.415758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.257 qpair failed and we were unable to recover it. 00:25:21.257 [2024-07-15 23:28:36.416000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.257 [2024-07-15 23:28:36.416036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.257 qpair failed and we were unable to recover it. 00:25:21.257 [2024-07-15 23:28:36.416268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.257 [2024-07-15 23:28:36.416295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.257 qpair failed and we were unable to recover it. 00:25:21.257 [2024-07-15 23:28:36.416559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.257 [2024-07-15 23:28:36.416609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.257 qpair failed and we were unable to recover it. 00:25:21.257 [2024-07-15 23:28:36.416883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.257 [2024-07-15 23:28:36.416911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.257 qpair failed and we were unable to recover it. 00:25:21.257 [2024-07-15 23:28:36.417133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.257 [2024-07-15 23:28:36.417156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.257 qpair failed and we were unable to recover it. 
00:25:21.257 [2024-07-15 23:28:36.417443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.257 [2024-07-15 23:28:36.417470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.257 qpair failed and we were unable to recover it. 00:25:21.257 [2024-07-15 23:28:36.417742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.257 [2024-07-15 23:28:36.417770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.257 qpair failed and we were unable to recover it. 00:25:21.257 [2024-07-15 23:28:36.418054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.257 [2024-07-15 23:28:36.418081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.257 qpair failed and we were unable to recover it. 00:25:21.257 [2024-07-15 23:28:36.418316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.257 [2024-07-15 23:28:36.418338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.257 qpair failed and we were unable to recover it. 00:25:21.257 [2024-07-15 23:28:36.418622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.257 [2024-07-15 23:28:36.418649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.257 qpair failed and we were unable to recover it. 00:25:21.257 [2024-07-15 23:28:36.418829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.257 [2024-07-15 23:28:36.418857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.257 qpair failed and we were unable to recover it. 00:25:21.257 [2024-07-15 23:28:36.419080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.257 [2024-07-15 23:28:36.419107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.257 qpair failed and we were unable to recover it. 00:25:21.257 [2024-07-15 23:28:36.419374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.257 [2024-07-15 23:28:36.419396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.257 qpair failed and we were unable to recover it. 00:25:21.257 [2024-07-15 23:28:36.419679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.257 [2024-07-15 23:28:36.419706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.257 qpair failed and we were unable to recover it. 00:25:21.257 [2024-07-15 23:28:36.419955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.257 [2024-07-15 23:28:36.419982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.257 qpair failed and we were unable to recover it. 
00:25:21.257 [2024-07-15 23:28:36.420264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.257 [2024-07-15 23:28:36.420291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.257 qpair failed and we were unable to recover it. 00:25:21.257 [2024-07-15 23:28:36.420568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.257 [2024-07-15 23:28:36.420590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.257 qpair failed and we were unable to recover it. 00:25:21.258 [2024-07-15 23:28:36.420854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.258 [2024-07-15 23:28:36.420888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.258 qpair failed and we were unable to recover it. 00:25:21.258 [2024-07-15 23:28:36.421174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.258 [2024-07-15 23:28:36.421222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.258 qpair failed and we were unable to recover it. 00:25:21.258 [2024-07-15 23:28:36.421442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.258 [2024-07-15 23:28:36.421469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.258 qpair failed and we were unable to recover it. 00:25:21.258 [2024-07-15 23:28:36.421711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.258 [2024-07-15 23:28:36.421732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.258 qpair failed and we were unable to recover it. 00:25:21.258 [2024-07-15 23:28:36.422029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.258 [2024-07-15 23:28:36.422058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.258 qpair failed and we were unable to recover it. 00:25:21.258 [2024-07-15 23:28:36.422345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.258 [2024-07-15 23:28:36.422394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.258 qpair failed and we were unable to recover it. 00:25:21.258 [2024-07-15 23:28:36.422628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.258 [2024-07-15 23:28:36.422655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.258 qpair failed and we were unable to recover it. 00:25:21.258 [2024-07-15 23:28:36.422927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.258 [2024-07-15 23:28:36.422950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.258 qpair failed and we were unable to recover it. 
00:25:21.258 [2024-07-15 23:28:36.423198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.258 [2024-07-15 23:28:36.423226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.258 qpair failed and we were unable to recover it. 00:25:21.258 [2024-07-15 23:28:36.423521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.258 [2024-07-15 23:28:36.423568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.258 qpair failed and we were unable to recover it. 00:25:21.258 [2024-07-15 23:28:36.423850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.258 [2024-07-15 23:28:36.423878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.258 qpair failed and we were unable to recover it. 00:25:21.258 [2024-07-15 23:28:36.424116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.258 [2024-07-15 23:28:36.424138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.258 qpair failed and we were unable to recover it. 00:25:21.258 [2024-07-15 23:28:36.424413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.258 [2024-07-15 23:28:36.424440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.258 qpair failed and we were unable to recover it. 00:25:21.258 [2024-07-15 23:28:36.424683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.258 [2024-07-15 23:28:36.424733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.258 qpair failed and we were unable to recover it. 00:25:21.258 [2024-07-15 23:28:36.424959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.258 [2024-07-15 23:28:36.424987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.258 qpair failed and we were unable to recover it. 00:25:21.258 [2024-07-15 23:28:36.425219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.258 [2024-07-15 23:28:36.425241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.258 qpair failed and we were unable to recover it. 00:25:21.258 [2024-07-15 23:28:36.425518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.258 [2024-07-15 23:28:36.425546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.258 qpair failed and we were unable to recover it. 00:25:21.258 [2024-07-15 23:28:36.425820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.258 [2024-07-15 23:28:36.425848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.258 qpair failed and we were unable to recover it. 
00:25:21.258 [2024-07-15 23:28:36.426085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.258 [2024-07-15 23:28:36.426112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.258 qpair failed and we were unable to recover it. 00:25:21.258 [2024-07-15 23:28:36.426377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.258 [2024-07-15 23:28:36.426399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.258 qpair failed and we were unable to recover it. 00:25:21.258 [2024-07-15 23:28:36.426648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.258 [2024-07-15 23:28:36.426675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.258 qpair failed and we were unable to recover it. 00:25:21.258 [2024-07-15 23:28:36.426885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.258 [2024-07-15 23:28:36.426912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.258 qpair failed and we were unable to recover it. 00:25:21.258 [2024-07-15 23:28:36.427129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.258 [2024-07-15 23:28:36.427156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.258 qpair failed and we were unable to recover it. 00:25:21.258 [2024-07-15 23:28:36.427377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.258 [2024-07-15 23:28:36.427399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.258 qpair failed and we were unable to recover it. 00:25:21.258 [2024-07-15 23:28:36.427690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.258 [2024-07-15 23:28:36.427717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.258 qpair failed and we were unable to recover it. 00:25:21.258 [2024-07-15 23:28:36.427991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.258 [2024-07-15 23:28:36.428019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.258 qpair failed and we were unable to recover it. 00:25:21.258 [2024-07-15 23:28:36.428287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.258 [2024-07-15 23:28:36.428314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.258 qpair failed and we were unable to recover it. 00:25:21.258 [2024-07-15 23:28:36.428545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.258 [2024-07-15 23:28:36.428571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.258 qpair failed and we were unable to recover it. 
00:25:21.258 [2024-07-15 23:28:36.428812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.258 [2024-07-15 23:28:36.428840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.258 qpair failed and we were unable to recover it. 00:25:21.258 [2024-07-15 23:28:36.429072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.258 [2024-07-15 23:28:36.429099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.258 qpair failed and we were unable to recover it. 00:25:21.258 [2024-07-15 23:28:36.429320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.258 [2024-07-15 23:28:36.429347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.258 qpair failed and we were unable to recover it. 00:25:21.258 [2024-07-15 23:28:36.429618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.258 [2024-07-15 23:28:36.429640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.258 qpair failed and we were unable to recover it. 00:25:21.258 [2024-07-15 23:28:36.429889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.258 [2024-07-15 23:28:36.429917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.258 qpair failed and we were unable to recover it. 00:25:21.258 [2024-07-15 23:28:36.430160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.258 [2024-07-15 23:28:36.430206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.258 qpair failed and we were unable to recover it. 00:25:21.258 [2024-07-15 23:28:36.430476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.258 [2024-07-15 23:28:36.430502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.258 qpair failed and we were unable to recover it. 00:25:21.258 [2024-07-15 23:28:36.430782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.258 [2024-07-15 23:28:36.430805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.258 qpair failed and we were unable to recover it. 00:25:21.258 [2024-07-15 23:28:36.431100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.258 [2024-07-15 23:28:36.431127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.258 qpair failed and we were unable to recover it. 00:25:21.258 [2024-07-15 23:28:36.431418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.258 [2024-07-15 23:28:36.431472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.258 qpair failed and we were unable to recover it. 
00:25:21.258 [2024-07-15 23:28:36.431745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.258 [2024-07-15 23:28:36.431773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.258 qpair failed and we were unable to recover it. 00:25:21.258 [2024-07-15 23:28:36.432045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.258 [2024-07-15 23:28:36.432067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.258 qpair failed and we were unable to recover it. 00:25:21.258 [2024-07-15 23:28:36.432359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.258 [2024-07-15 23:28:36.432386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.258 qpair failed and we were unable to recover it. 00:25:21.258 [2024-07-15 23:28:36.432672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.259 [2024-07-15 23:28:36.432720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.259 qpair failed and we were unable to recover it. 00:25:21.259 [2024-07-15 23:28:36.432962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.259 [2024-07-15 23:28:36.432989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.259 qpair failed and we were unable to recover it. 00:25:21.259 [2024-07-15 23:28:36.433257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.259 [2024-07-15 23:28:36.433279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.259 qpair failed and we were unable to recover it. 00:25:21.259 [2024-07-15 23:28:36.433559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.259 [2024-07-15 23:28:36.433586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.259 qpair failed and we were unable to recover it. 00:25:21.259 [2024-07-15 23:28:36.433864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.259 [2024-07-15 23:28:36.433892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.259 qpair failed and we were unable to recover it. 00:25:21.259 [2024-07-15 23:28:36.434164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.259 [2024-07-15 23:28:36.434191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.259 qpair failed and we were unable to recover it. 00:25:21.259 [2024-07-15 23:28:36.434415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.259 [2024-07-15 23:28:36.434437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.259 qpair failed and we were unable to recover it. 
00:25:21.259 [2024-07-15 23:28:36.434723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.259 [2024-07-15 23:28:36.434758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.259 qpair failed and we were unable to recover it. 00:25:21.259 [2024-07-15 23:28:36.434997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.259 [2024-07-15 23:28:36.435024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.259 qpair failed and we were unable to recover it. 00:25:21.259 [2024-07-15 23:28:36.435265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.259 [2024-07-15 23:28:36.435291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.259 qpair failed and we were unable to recover it. 00:25:21.259 [2024-07-15 23:28:36.435565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.259 [2024-07-15 23:28:36.435587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.259 qpair failed and we were unable to recover it. 00:25:21.259 [2024-07-15 23:28:36.435864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.259 [2024-07-15 23:28:36.435891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.259 qpair failed and we were unable to recover it. 00:25:21.259 [2024-07-15 23:28:36.436141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.259 [2024-07-15 23:28:36.436188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.259 qpair failed and we were unable to recover it. 00:25:21.259 [2024-07-15 23:28:36.436469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.259 [2024-07-15 23:28:36.436496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.259 qpair failed and we were unable to recover it. 00:25:21.259 [2024-07-15 23:28:36.436772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.259 [2024-07-15 23:28:36.436794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.259 qpair failed and we were unable to recover it. 00:25:21.259 [2024-07-15 23:28:36.437069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.259 [2024-07-15 23:28:36.437096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.259 qpair failed and we were unable to recover it. 00:25:21.259 [2024-07-15 23:28:36.437317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.259 [2024-07-15 23:28:36.437365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.259 qpair failed and we were unable to recover it. 
00:25:21.259 [2024-07-15 23:28:36.437635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.259 [2024-07-15 23:28:36.437662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.259 qpair failed and we were unable to recover it. 00:25:21.259 [2024-07-15 23:28:36.437929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.259 [2024-07-15 23:28:36.437952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.259 qpair failed and we were unable to recover it. 00:25:21.259 [2024-07-15 23:28:36.438176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.259 [2024-07-15 23:28:36.438203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.259 qpair failed and we were unable to recover it. 00:25:21.259 [2024-07-15 23:28:36.438488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.259 [2024-07-15 23:28:36.438538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.259 qpair failed and we were unable to recover it. 00:25:21.259 [2024-07-15 23:28:36.438808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.259 [2024-07-15 23:28:36.438836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.259 qpair failed and we were unable to recover it. 00:25:21.259 [2024-07-15 23:28:36.439107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.259 [2024-07-15 23:28:36.439129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.259 qpair failed and we were unable to recover it. 00:25:21.259 [2024-07-15 23:28:36.439359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.259 [2024-07-15 23:28:36.439386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.259 qpair failed and we were unable to recover it. 00:25:21.259 [2024-07-15 23:28:36.439679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.259 [2024-07-15 23:28:36.439728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.259 qpair failed and we were unable to recover it. 00:25:21.259 [2024-07-15 23:28:36.440014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.259 [2024-07-15 23:28:36.440042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.259 qpair failed and we were unable to recover it. 00:25:21.259 [2024-07-15 23:28:36.440311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.259 [2024-07-15 23:28:36.440333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.259 qpair failed and we were unable to recover it. 
00:25:21.259 [2024-07-15 23:28:36.440621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.259 [2024-07-15 23:28:36.440648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.259 qpair failed and we were unable to recover it. 00:25:21.259 [2024-07-15 23:28:36.440877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.259 [2024-07-15 23:28:36.440905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.259 qpair failed and we were unable to recover it. 00:25:21.259 [2024-07-15 23:28:36.441143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.259 [2024-07-15 23:28:36.441170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.259 qpair failed and we were unable to recover it. 00:25:21.259 [2024-07-15 23:28:36.441450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.259 [2024-07-15 23:28:36.441472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.259 qpair failed and we were unable to recover it. 00:25:21.259 [2024-07-15 23:28:36.441744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.259 [2024-07-15 23:28:36.441772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.259 qpair failed and we were unable to recover it. 00:25:21.259 [2024-07-15 23:28:36.442010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.259 [2024-07-15 23:28:36.442037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.259 qpair failed and we were unable to recover it. 00:25:21.259 [2024-07-15 23:28:36.442231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.259 [2024-07-15 23:28:36.442258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.259 qpair failed and we were unable to recover it. 00:25:21.259 [2024-07-15 23:28:36.442502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.259 [2024-07-15 23:28:36.442524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.259 qpair failed and we were unable to recover it. 00:25:21.259 [2024-07-15 23:28:36.442773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.259 [2024-07-15 23:28:36.442801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.259 qpair failed and we were unable to recover it. 00:25:21.259 [2024-07-15 23:28:36.443016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.259 [2024-07-15 23:28:36.443043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.259 qpair failed and we were unable to recover it. 
00:25:21.259 [2024-07-15 23:28:36.443312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.259 [2024-07-15 23:28:36.443338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.259 qpair failed and we were unable to recover it. 00:25:21.259 [2024-07-15 23:28:36.443612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.259 [2024-07-15 23:28:36.443634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.259 qpair failed and we were unable to recover it. 00:25:21.259 [2024-07-15 23:28:36.443912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.259 [2024-07-15 23:28:36.443940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.259 qpair failed and we were unable to recover it. 00:25:21.259 [2024-07-15 23:28:36.444162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.259 [2024-07-15 23:28:36.444209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.259 qpair failed and we were unable to recover it. 00:25:21.259 [2024-07-15 23:28:36.444486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.260 [2024-07-15 23:28:36.444513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.260 qpair failed and we were unable to recover it. 00:25:21.260 [2024-07-15 23:28:36.444756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.260 [2024-07-15 23:28:36.444778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.260 qpair failed and we were unable to recover it. 00:25:21.260 [2024-07-15 23:28:36.445019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.260 [2024-07-15 23:28:36.445046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.260 qpair failed and we were unable to recover it. 00:25:21.260 [2024-07-15 23:28:36.445344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.260 [2024-07-15 23:28:36.445396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.260 qpair failed and we were unable to recover it. 00:25:21.260 [2024-07-15 23:28:36.445622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.260 [2024-07-15 23:28:36.445648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.260 qpair failed and we were unable to recover it. 00:25:21.260 [2024-07-15 23:28:36.445858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.260 [2024-07-15 23:28:36.445881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.260 qpair failed and we were unable to recover it. 
00:25:21.260 [2024-07-15 23:28:36.446179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.260 [2024-07-15 23:28:36.446207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.260 qpair failed and we were unable to recover it. 00:25:21.260 [2024-07-15 23:28:36.446489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.260 [2024-07-15 23:28:36.446535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.260 qpair failed and we were unable to recover it. 00:25:21.260 [2024-07-15 23:28:36.446766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.260 [2024-07-15 23:28:36.446794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.260 qpair failed and we were unable to recover it. 00:25:21.260 [2024-07-15 23:28:36.447021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.260 [2024-07-15 23:28:36.447043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.260 qpair failed and we were unable to recover it. 00:25:21.260 [2024-07-15 23:28:36.447324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.260 [2024-07-15 23:28:36.447351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.260 qpair failed and we were unable to recover it. 00:25:21.260 [2024-07-15 23:28:36.447577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.260 [2024-07-15 23:28:36.447627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.260 qpair failed and we were unable to recover it. 00:25:21.260 [2024-07-15 23:28:36.447836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.260 [2024-07-15 23:28:36.447863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.260 qpair failed and we were unable to recover it. 00:25:21.260 [2024-07-15 23:28:36.448136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.260 [2024-07-15 23:28:36.448162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.260 qpair failed and we were unable to recover it. 00:25:21.260 [2024-07-15 23:28:36.448440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.260 [2024-07-15 23:28:36.448467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.260 qpair failed and we were unable to recover it. 00:25:21.260 [2024-07-15 23:28:36.448703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.260 [2024-07-15 23:28:36.448730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.260 qpair failed and we were unable to recover it. 
00:25:21.260 [2024-07-15 23:28:36.448948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.260 [2024-07-15 23:28:36.448975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.260 qpair failed and we were unable to recover it. 00:25:21.260 [2024-07-15 23:28:36.449241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.260 [2024-07-15 23:28:36.449263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.260 qpair failed and we were unable to recover it. 00:25:21.260 [2024-07-15 23:28:36.449492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.260 [2024-07-15 23:28:36.449519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.260 qpair failed and we were unable to recover it. 00:25:21.260 [2024-07-15 23:28:36.449733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.260 [2024-07-15 23:28:36.449766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.260 qpair failed and we were unable to recover it. 00:25:21.260 [2024-07-15 23:28:36.450043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.260 [2024-07-15 23:28:36.450070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.260 qpair failed and we were unable to recover it. 00:25:21.260 [2024-07-15 23:28:36.450338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.260 [2024-07-15 23:28:36.450360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.260 qpair failed and we were unable to recover it. 00:25:21.260 [2024-07-15 23:28:36.450610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.260 [2024-07-15 23:28:36.450639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.260 qpair failed and we were unable to recover it. 00:25:21.260 [2024-07-15 23:28:36.450898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.260 [2024-07-15 23:28:36.450925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.260 qpair failed and we were unable to recover it. 00:25:21.260 [2024-07-15 23:28:36.451150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.260 [2024-07-15 23:28:36.451177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.260 qpair failed and we were unable to recover it. 00:25:21.260 [2024-07-15 23:28:36.451439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.260 [2024-07-15 23:28:36.451461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.260 qpair failed and we were unable to recover it. 
00:25:21.260 [2024-07-15 23:28:36.451713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.260 [2024-07-15 23:28:36.451746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.260 qpair failed and we were unable to recover it. 00:25:21.260 [2024-07-15 23:28:36.451972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.260 [2024-07-15 23:28:36.452000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.260 qpair failed and we were unable to recover it. 00:25:21.260 [2024-07-15 23:28:36.452229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.260 [2024-07-15 23:28:36.452256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.260 qpair failed and we were unable to recover it. 00:25:21.260 [2024-07-15 23:28:36.452431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.260 [2024-07-15 23:28:36.452453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.260 qpair failed and we were unable to recover it. 00:25:21.260 [2024-07-15 23:28:36.452678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.260 [2024-07-15 23:28:36.452705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.260 qpair failed and we were unable to recover it. 00:25:21.260 [2024-07-15 23:28:36.452938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.260 [2024-07-15 23:28:36.452965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.260 qpair failed and we were unable to recover it. 00:25:21.260 [2024-07-15 23:28:36.453165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.260 [2024-07-15 23:28:36.453192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.260 qpair failed and we were unable to recover it. 00:25:21.260 [2024-07-15 23:28:36.453433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.260 [2024-07-15 23:28:36.453455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.260 qpair failed and we were unable to recover it. 00:25:21.260 [2024-07-15 23:28:36.453703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.260 [2024-07-15 23:28:36.453730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.260 qpair failed and we were unable to recover it. 00:25:21.260 [2024-07-15 23:28:36.454034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.260 [2024-07-15 23:28:36.454058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.260 qpair failed and we were unable to recover it. 
00:25:21.260 [2024-07-15 23:28:36.454333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.261 [2024-07-15 23:28:36.454360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.261 qpair failed and we were unable to recover it. 00:25:21.261 [2024-07-15 23:28:36.454601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.261 [2024-07-15 23:28:36.454622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.261 qpair failed and we were unable to recover it. 00:25:21.261 [2024-07-15 23:28:36.454862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.261 [2024-07-15 23:28:36.454890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.261 qpair failed and we were unable to recover it. 00:25:21.261 [2024-07-15 23:28:36.455178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.261 [2024-07-15 23:28:36.455228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.261 qpair failed and we were unable to recover it. 00:25:21.261 [2024-07-15 23:28:36.455499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.261 [2024-07-15 23:28:36.455530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.261 qpair failed and we were unable to recover it. 00:25:21.261 [2024-07-15 23:28:36.455787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.261 [2024-07-15 23:28:36.455812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.261 qpair failed and we were unable to recover it. 00:25:21.261 [2024-07-15 23:28:36.456036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.261 [2024-07-15 23:28:36.456063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.261 qpair failed and we were unable to recover it. 00:25:21.261 [2024-07-15 23:28:36.456289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.261 [2024-07-15 23:28:36.456340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.261 qpair failed and we were unable to recover it. 00:25:21.261 [2024-07-15 23:28:36.456582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.261 [2024-07-15 23:28:36.456609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.261 qpair failed and we were unable to recover it. 00:25:21.261 [2024-07-15 23:28:36.456845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.261 [2024-07-15 23:28:36.456870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.261 qpair failed and we were unable to recover it. 
00:25:21.261 [2024-07-15 23:28:36.457092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.261 [2024-07-15 23:28:36.457119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.261 qpair failed and we were unable to recover it. 00:25:21.261 [2024-07-15 23:28:36.457339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.261 [2024-07-15 23:28:36.457388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.261 qpair failed and we were unable to recover it. 00:25:21.261 [2024-07-15 23:28:36.457672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.261 [2024-07-15 23:28:36.457699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.261 qpair failed and we were unable to recover it. 00:25:21.261 [2024-07-15 23:28:36.457950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.261 [2024-07-15 23:28:36.457975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.261 qpair failed and we were unable to recover it. 00:25:21.261 [2024-07-15 23:28:36.458256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.261 [2024-07-15 23:28:36.458284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.261 qpair failed and we were unable to recover it. 00:25:21.261 [2024-07-15 23:28:36.458571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.261 [2024-07-15 23:28:36.458648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.261 qpair failed and we were unable to recover it. 00:25:21.261 [2024-07-15 23:28:36.458923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.261 [2024-07-15 23:28:36.458951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.261 qpair failed and we were unable to recover it. 00:25:21.261 [2024-07-15 23:28:36.459218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.261 [2024-07-15 23:28:36.459256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.261 qpair failed and we were unable to recover it. 00:25:21.261 [2024-07-15 23:28:36.459551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.261 [2024-07-15 23:28:36.459578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.261 qpair failed and we were unable to recover it. 00:25:21.261 [2024-07-15 23:28:36.459850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.261 [2024-07-15 23:28:36.459877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.261 qpair failed and we were unable to recover it. 
00:25:21.261 [2024-07-15 23:28:36.460158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.261 [2024-07-15 23:28:36.460185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.261 qpair failed and we were unable to recover it. 00:25:21.261 [2024-07-15 23:28:36.460466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.261 [2024-07-15 23:28:36.460491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.261 qpair failed and we were unable to recover it. 00:25:21.261 [2024-07-15 23:28:36.460769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.261 [2024-07-15 23:28:36.460796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.261 qpair failed and we were unable to recover it. 00:25:21.261 [2024-07-15 23:28:36.461073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.261 [2024-07-15 23:28:36.461100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.261 qpair failed and we were unable to recover it. 00:25:21.261 [2024-07-15 23:28:36.461330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.261 [2024-07-15 23:28:36.461357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.261 qpair failed and we were unable to recover it. 00:25:21.261 [2024-07-15 23:28:36.461624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.261 [2024-07-15 23:28:36.461650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.261 qpair failed and we were unable to recover it. 00:25:21.261 [2024-07-15 23:28:36.461878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.261 [2024-07-15 23:28:36.461906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.261 qpair failed and we were unable to recover it. 00:25:21.261 [2024-07-15 23:28:36.462162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.261 [2024-07-15 23:28:36.462210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.261 qpair failed and we were unable to recover it. 00:25:21.261 [2024-07-15 23:28:36.462433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.261 [2024-07-15 23:28:36.462460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.261 qpair failed and we were unable to recover it. 00:25:21.261 [2024-07-15 23:28:36.462735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.261 [2024-07-15 23:28:36.462768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.261 qpair failed and we were unable to recover it. 
00:25:21.261 [2024-07-15 23:28:36.463020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.261 [2024-07-15 23:28:36.463047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.261 qpair failed and we were unable to recover it. 00:25:21.261 [2024-07-15 23:28:36.463240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.261 [2024-07-15 23:28:36.463304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.261 qpair failed and we were unable to recover it. 00:25:21.261 [2024-07-15 23:28:36.463578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.261 [2024-07-15 23:28:36.463605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.261 qpair failed and we were unable to recover it. 00:25:21.261 [2024-07-15 23:28:36.463876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.261 [2024-07-15 23:28:36.463901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.261 qpair failed and we were unable to recover it. 00:25:21.261 [2024-07-15 23:28:36.464166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.261 [2024-07-15 23:28:36.464193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.261 qpair failed and we were unable to recover it. 00:25:21.261 [2024-07-15 23:28:36.464440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.261 [2024-07-15 23:28:36.464491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.261 qpair failed and we were unable to recover it. 00:25:21.261 [2024-07-15 23:28:36.464715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.261 [2024-07-15 23:28:36.464748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.261 qpair failed and we were unable to recover it. 00:25:21.261 [2024-07-15 23:28:36.465031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.261 [2024-07-15 23:28:36.465056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.261 qpair failed and we were unable to recover it. 00:25:21.261 [2024-07-15 23:28:36.465331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.261 [2024-07-15 23:28:36.465358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.261 qpair failed and we were unable to recover it. 00:25:21.261 [2024-07-15 23:28:36.465649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.261 [2024-07-15 23:28:36.465698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.261 qpair failed and we were unable to recover it. 
00:25:21.261 [2024-07-15 23:28:36.465917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.261 [2024-07-15 23:28:36.465941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.261 qpair failed and we were unable to recover it. 00:25:21.261 [2024-07-15 23:28:36.466184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.262 [2024-07-15 23:28:36.466209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.262 qpair failed and we were unable to recover it. 00:25:21.262 [2024-07-15 23:28:36.466509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.262 [2024-07-15 23:28:36.466536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.262 qpair failed and we were unable to recover it. 00:25:21.262 [2024-07-15 23:28:36.466816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.262 [2024-07-15 23:28:36.466843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.262 qpair failed and we were unable to recover it. 00:25:21.262 [2024-07-15 23:28:36.467078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.262 [2024-07-15 23:28:36.467106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.262 qpair failed and we were unable to recover it. 00:25:21.262 [2024-07-15 23:28:36.467330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.262 [2024-07-15 23:28:36.467354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.262 qpair failed and we were unable to recover it. 00:25:21.262 [2024-07-15 23:28:36.467621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.262 [2024-07-15 23:28:36.467648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.262 qpair failed and we were unable to recover it. 00:25:21.262 [2024-07-15 23:28:36.467872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.262 [2024-07-15 23:28:36.467900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.262 qpair failed and we were unable to recover it. 00:25:21.262 [2024-07-15 23:28:36.468138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.262 [2024-07-15 23:28:36.468166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.262 qpair failed and we were unable to recover it. 00:25:21.262 [2024-07-15 23:28:36.468405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.262 [2024-07-15 23:28:36.468429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.262 qpair failed and we were unable to recover it. 
00:25:21.262 [2024-07-15 23:28:36.468691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.262 [2024-07-15 23:28:36.468718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.262 qpair failed and we were unable to recover it. 00:25:21.262 [2024-07-15 23:28:36.468959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.262 [2024-07-15 23:28:36.468986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.262 qpair failed and we were unable to recover it. 00:25:21.262 [2024-07-15 23:28:36.469185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.262 [2024-07-15 23:28:36.469212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.262 qpair failed and we were unable to recover it. 00:25:21.262 [2024-07-15 23:28:36.469440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.262 [2024-07-15 23:28:36.469464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.262 qpair failed and we were unable to recover it. 00:25:21.262 [2024-07-15 23:28:36.469698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.262 [2024-07-15 23:28:36.469725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.262 qpair failed and we were unable to recover it. 00:25:21.262 [2024-07-15 23:28:36.470015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.262 [2024-07-15 23:28:36.470043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.262 qpair failed and we were unable to recover it. 00:25:21.262 [2024-07-15 23:28:36.470311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.262 [2024-07-15 23:28:36.470338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.262 qpair failed and we were unable to recover it. 00:25:21.262 [2024-07-15 23:28:36.470567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.262 [2024-07-15 23:28:36.470592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.262 qpair failed and we were unable to recover it. 00:25:21.262 [2024-07-15 23:28:36.470871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.262 [2024-07-15 23:28:36.470899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.262 qpair failed and we were unable to recover it. 00:25:21.262 [2024-07-15 23:28:36.471186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.262 [2024-07-15 23:28:36.471260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.262 qpair failed and we were unable to recover it. 
00:25:21.262 [2024-07-15 23:28:36.471496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.262 [2024-07-15 23:28:36.471523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.262 qpair failed and we were unable to recover it. 00:25:21.262 [2024-07-15 23:28:36.471752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.262 [2024-07-15 23:28:36.471776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.262 qpair failed and we were unable to recover it. 00:25:21.262 [2024-07-15 23:28:36.472059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.262 [2024-07-15 23:28:36.472085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.262 qpair failed and we were unable to recover it. 00:25:21.262 [2024-07-15 23:28:36.472330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.262 [2024-07-15 23:28:36.472379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.262 qpair failed and we were unable to recover it. 00:25:21.262 [2024-07-15 23:28:36.472608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.262 [2024-07-15 23:28:36.472635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.262 qpair failed and we were unable to recover it. 00:25:21.262 [2024-07-15 23:28:36.472909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.262 [2024-07-15 23:28:36.472934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.262 qpair failed and we were unable to recover it. 00:25:21.262 [2024-07-15 23:28:36.473154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.262 [2024-07-15 23:28:36.473182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.262 qpair failed and we were unable to recover it. 00:25:21.262 [2024-07-15 23:28:36.473433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.262 [2024-07-15 23:28:36.473483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.262 qpair failed and we were unable to recover it. 00:25:21.262 [2024-07-15 23:28:36.473763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.262 [2024-07-15 23:28:36.473791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.262 qpair failed and we were unable to recover it. 00:25:21.262 [2024-07-15 23:28:36.474061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.262 [2024-07-15 23:28:36.474086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.262 qpair failed and we were unable to recover it. 
00:25:21.262 [2024-07-15 23:28:36.474370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.262 [2024-07-15 23:28:36.474398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.262 qpair failed and we were unable to recover it. 00:25:21.262 [2024-07-15 23:28:36.474640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.262 [2024-07-15 23:28:36.474690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.262 qpair failed and we were unable to recover it. 00:25:21.262 [2024-07-15 23:28:36.474982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.262 [2024-07-15 23:28:36.475008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.262 qpair failed and we were unable to recover it. 00:25:21.262 [2024-07-15 23:28:36.475268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.262 [2024-07-15 23:28:36.475293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.262 qpair failed and we were unable to recover it. 00:25:21.262 [2024-07-15 23:28:36.475529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.262 [2024-07-15 23:28:36.475556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.262 qpair failed and we were unable to recover it. 00:25:21.262 [2024-07-15 23:28:36.475855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.262 [2024-07-15 23:28:36.475883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.262 qpair failed and we were unable to recover it. 00:25:21.262 [2024-07-15 23:28:36.476115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.262 [2024-07-15 23:28:36.476142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.262 qpair failed and we were unable to recover it. 00:25:21.262 [2024-07-15 23:28:36.476407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.262 [2024-07-15 23:28:36.476431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.262 qpair failed and we were unable to recover it. 00:25:21.262 [2024-07-15 23:28:36.476698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.262 [2024-07-15 23:28:36.476726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.262 qpair failed and we were unable to recover it. 00:25:21.262 [2024-07-15 23:28:36.476978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.262 [2024-07-15 23:28:36.477006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.262 qpair failed and we were unable to recover it. 
00:25:21.262 [2024-07-15 23:28:36.477275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.262 [2024-07-15 23:28:36.477302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.262 qpair failed and we were unable to recover it. 00:25:21.262 [2024-07-15 23:28:36.477581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.262 [2024-07-15 23:28:36.477606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.262 qpair failed and we were unable to recover it. 00:25:21.262 [2024-07-15 23:28:36.477926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.263 [2024-07-15 23:28:36.477955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.263 qpair failed and we were unable to recover it. 00:25:21.263 [2024-07-15 23:28:36.478249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.263 [2024-07-15 23:28:36.478305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.263 qpair failed and we were unable to recover it. 00:25:21.263 [2024-07-15 23:28:36.478501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.263 [2024-07-15 23:28:36.478529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.263 qpair failed and we were unable to recover it. 00:25:21.263 [2024-07-15 23:28:36.478809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.263 [2024-07-15 23:28:36.478834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.263 qpair failed and we were unable to recover it. 00:25:21.263 [2024-07-15 23:28:36.479095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.263 [2024-07-15 23:28:36.479122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.263 qpair failed and we were unable to recover it. 00:25:21.263 [2024-07-15 23:28:36.479372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.263 [2024-07-15 23:28:36.479421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.263 qpair failed and we were unable to recover it. 00:25:21.263 [2024-07-15 23:28:36.479698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.263 [2024-07-15 23:28:36.479725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.263 qpair failed and we were unable to recover it. 00:25:21.263 [2024-07-15 23:28:36.479945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.263 [2024-07-15 23:28:36.479969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.263 qpair failed and we were unable to recover it. 
00:25:21.263 [2024-07-15 23:28:36.480233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.263 [2024-07-15 23:28:36.480260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420
00:25:21.263 qpair failed and we were unable to recover it.
00:25:21.263 [... the same three-message failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats roughly 200 more times between 23:28:36.480 and 23:28:36.540 ...]
00:25:21.545 [2024-07-15 23:28:36.540603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.545 [2024-07-15 23:28:36.540651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420
00:25:21.545 qpair failed and we were unable to recover it.
00:25:21.545 [2024-07-15 23:28:36.540879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.545 [2024-07-15 23:28:36.540907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.545 qpair failed and we were unable to recover it. 00:25:21.545 [2024-07-15 23:28:36.541184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.545 [2024-07-15 23:28:36.541207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.545 qpair failed and we were unable to recover it. 00:25:21.545 [2024-07-15 23:28:36.541426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.545 [2024-07-15 23:28:36.541453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.545 qpair failed and we were unable to recover it. 00:25:21.545 [2024-07-15 23:28:36.541706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.545 [2024-07-15 23:28:36.541733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.545 qpair failed and we were unable to recover it. 00:25:21.545 [2024-07-15 23:28:36.541985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.545 [2024-07-15 23:28:36.542013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.545 qpair failed and we were unable to recover it. 00:25:21.545 [2024-07-15 23:28:36.542284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.545 [2024-07-15 23:28:36.542306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.545 qpair failed and we were unable to recover it. 00:25:21.545 [2024-07-15 23:28:36.542583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.545 [2024-07-15 23:28:36.542611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.545 qpair failed and we were unable to recover it. 00:25:21.545 [2024-07-15 23:28:36.542861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.545 [2024-07-15 23:28:36.542890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.545 qpair failed and we were unable to recover it. 00:25:21.545 [2024-07-15 23:28:36.543126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.545 [2024-07-15 23:28:36.543153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.545 qpair failed and we were unable to recover it. 00:25:21.545 [2024-07-15 23:28:36.543438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.545 [2024-07-15 23:28:36.543460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.545 qpair failed and we were unable to recover it. 
00:25:21.545 [2024-07-15 23:28:36.543757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.545 [2024-07-15 23:28:36.543785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.545 qpair failed and we were unable to recover it. 00:25:21.545 [2024-07-15 23:28:36.544018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.545 [2024-07-15 23:28:36.544045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.545 qpair failed and we were unable to recover it. 00:25:21.545 [2024-07-15 23:28:36.544265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.545 [2024-07-15 23:28:36.544292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.545 qpair failed and we were unable to recover it. 00:25:21.545 [2024-07-15 23:28:36.544576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.545 [2024-07-15 23:28:36.544598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.545 qpair failed and we were unable to recover it. 00:25:21.545 [2024-07-15 23:28:36.544898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.545 [2024-07-15 23:28:36.544926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.545 qpair failed and we were unable to recover it. 00:25:21.545 [2024-07-15 23:28:36.545219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.545 [2024-07-15 23:28:36.545274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.545 qpair failed and we were unable to recover it. 00:25:21.545 [2024-07-15 23:28:36.545482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.545 [2024-07-15 23:28:36.545513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.545 qpair failed and we were unable to recover it. 00:25:21.545 [2024-07-15 23:28:36.545790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.545 [2024-07-15 23:28:36.545813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.545 qpair failed and we were unable to recover it. 00:25:21.545 [2024-07-15 23:28:36.546056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.545 [2024-07-15 23:28:36.546082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.545 qpair failed and we were unable to recover it. 00:25:21.545 [2024-07-15 23:28:36.546366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.545 [2024-07-15 23:28:36.546413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.545 qpair failed and we were unable to recover it. 
00:25:21.545 [2024-07-15 23:28:36.546650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.545 [2024-07-15 23:28:36.546677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.545 qpair failed and we were unable to recover it. 00:25:21.546 [2024-07-15 23:28:36.546946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.546 [2024-07-15 23:28:36.546970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.546 qpair failed and we were unable to recover it. 00:25:21.546 [2024-07-15 23:28:36.547228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.546 [2024-07-15 23:28:36.547256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.546 qpair failed and we were unable to recover it. 00:25:21.546 [2024-07-15 23:28:36.547533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.546 [2024-07-15 23:28:36.547581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.546 qpair failed and we were unable to recover it. 00:25:21.546 [2024-07-15 23:28:36.547811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.546 [2024-07-15 23:28:36.547839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.546 qpair failed and we were unable to recover it. 00:25:21.546 [2024-07-15 23:28:36.548110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.546 [2024-07-15 23:28:36.548132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.546 qpair failed and we were unable to recover it. 00:25:21.546 [2024-07-15 23:28:36.548356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.546 [2024-07-15 23:28:36.548383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.546 qpair failed and we were unable to recover it. 00:25:21.546 [2024-07-15 23:28:36.548633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.546 [2024-07-15 23:28:36.548679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.546 qpair failed and we were unable to recover it. 00:25:21.546 [2024-07-15 23:28:36.548961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.546 [2024-07-15 23:28:36.548988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.546 qpair failed and we were unable to recover it. 00:25:21.546 [2024-07-15 23:28:36.549272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.546 [2024-07-15 23:28:36.549294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.546 qpair failed and we were unable to recover it. 
00:25:21.546 [2024-07-15 23:28:36.549593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.546 [2024-07-15 23:28:36.549620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.546 qpair failed and we were unable to recover it. 00:25:21.546 [2024-07-15 23:28:36.549789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.546 [2024-07-15 23:28:36.549816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.546 qpair failed and we were unable to recover it. 00:25:21.546 [2024-07-15 23:28:36.550057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.546 [2024-07-15 23:28:36.550084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.546 qpair failed and we were unable to recover it. 00:25:21.546 [2024-07-15 23:28:36.550363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.546 [2024-07-15 23:28:36.550385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.546 qpair failed and we were unable to recover it. 00:25:21.546 [2024-07-15 23:28:36.550600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.546 [2024-07-15 23:28:36.550627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.546 qpair failed and we were unable to recover it. 00:25:21.546 [2024-07-15 23:28:36.550901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.546 [2024-07-15 23:28:36.550929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.546 qpair failed and we were unable to recover it. 00:25:21.546 [2024-07-15 23:28:36.551206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.546 [2024-07-15 23:28:36.551233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.546 qpair failed and we were unable to recover it. 00:25:21.546 [2024-07-15 23:28:36.551488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.546 [2024-07-15 23:28:36.551511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.546 qpair failed and we were unable to recover it. 00:25:21.546 [2024-07-15 23:28:36.551758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.546 [2024-07-15 23:28:36.551786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.546 qpair failed and we were unable to recover it. 00:25:21.546 [2024-07-15 23:28:36.552063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.546 [2024-07-15 23:28:36.552091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.546 qpair failed and we were unable to recover it. 
00:25:21.546 [2024-07-15 23:28:36.552318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.546 [2024-07-15 23:28:36.552345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.546 qpair failed and we were unable to recover it. 00:25:21.546 [2024-07-15 23:28:36.552624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.546 [2024-07-15 23:28:36.552646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.546 qpair failed and we were unable to recover it. 00:25:21.546 [2024-07-15 23:28:36.552866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.546 [2024-07-15 23:28:36.552893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.546 qpair failed and we were unable to recover it. 00:25:21.546 [2024-07-15 23:28:36.553186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.546 [2024-07-15 23:28:36.553238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.546 qpair failed and we were unable to recover it. 00:25:21.546 [2024-07-15 23:28:36.553510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.546 [2024-07-15 23:28:36.553537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.546 qpair failed and we were unable to recover it. 00:25:21.546 [2024-07-15 23:28:36.553769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.546 [2024-07-15 23:28:36.553792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.546 qpair failed and we were unable to recover it. 00:25:21.546 [2024-07-15 23:28:36.554073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.546 [2024-07-15 23:28:36.554100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.546 qpair failed and we were unable to recover it. 00:25:21.546 [2024-07-15 23:28:36.554335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.546 [2024-07-15 23:28:36.554384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.546 qpair failed and we were unable to recover it. 00:25:21.546 [2024-07-15 23:28:36.554669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.546 [2024-07-15 23:28:36.554696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.546 qpair failed and we were unable to recover it. 00:25:21.546 [2024-07-15 23:28:36.554981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.546 [2024-07-15 23:28:36.555004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.546 qpair failed and we were unable to recover it. 
00:25:21.546 [2024-07-15 23:28:36.555304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.546 [2024-07-15 23:28:36.555331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.546 qpair failed and we were unable to recover it. 00:25:21.546 [2024-07-15 23:28:36.555578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.546 [2024-07-15 23:28:36.555626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.546 qpair failed and we were unable to recover it. 00:25:21.546 [2024-07-15 23:28:36.555898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.546 [2024-07-15 23:28:36.555926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.546 qpair failed and we were unable to recover it. 00:25:21.546 [2024-07-15 23:28:36.556168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.546 [2024-07-15 23:28:36.556191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.546 qpair failed and we were unable to recover it. 00:25:21.546 [2024-07-15 23:28:36.556476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.546 [2024-07-15 23:28:36.556504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.546 qpair failed and we were unable to recover it. 00:25:21.546 [2024-07-15 23:28:36.556776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.546 [2024-07-15 23:28:36.556803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.546 qpair failed and we were unable to recover it. 00:25:21.546 [2024-07-15 23:28:36.557075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.546 [2024-07-15 23:28:36.557102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.546 qpair failed and we were unable to recover it. 00:25:21.546 [2024-07-15 23:28:36.557378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.546 [2024-07-15 23:28:36.557401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.546 qpair failed and we were unable to recover it. 00:25:21.546 [2024-07-15 23:28:36.557698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.546 [2024-07-15 23:28:36.557726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.546 qpair failed and we were unable to recover it. 00:25:21.546 [2024-07-15 23:28:36.558016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.546 [2024-07-15 23:28:36.558044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.546 qpair failed and we were unable to recover it. 
00:25:21.546 [2024-07-15 23:28:36.558277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.546 [2024-07-15 23:28:36.558304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.546 qpair failed and we were unable to recover it. 00:25:21.546 [2024-07-15 23:28:36.558491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.546 [2024-07-15 23:28:36.558513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.546 qpair failed and we were unable to recover it. 00:25:21.546 [2024-07-15 23:28:36.558755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.547 [2024-07-15 23:28:36.558783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.547 qpair failed and we were unable to recover it. 00:25:21.547 [2024-07-15 23:28:36.559053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.547 [2024-07-15 23:28:36.559080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.547 qpair failed and we were unable to recover it. 00:25:21.547 [2024-07-15 23:28:36.559357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.547 [2024-07-15 23:28:36.559384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.547 qpair failed and we were unable to recover it. 00:25:21.547 [2024-07-15 23:28:36.559655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.547 [2024-07-15 23:28:36.559677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.547 qpair failed and we were unable to recover it. 00:25:21.547 [2024-07-15 23:28:36.559965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.547 [2024-07-15 23:28:36.559992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.547 qpair failed and we were unable to recover it. 00:25:21.547 [2024-07-15 23:28:36.560235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.547 [2024-07-15 23:28:36.560284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.547 qpair failed and we were unable to recover it. 00:25:21.547 [2024-07-15 23:28:36.560480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.547 [2024-07-15 23:28:36.560508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.547 qpair failed and we were unable to recover it. 00:25:21.547 [2024-07-15 23:28:36.560722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.547 [2024-07-15 23:28:36.560764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.547 qpair failed and we were unable to recover it. 
00:25:21.547 [2024-07-15 23:28:36.561041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.547 [2024-07-15 23:28:36.561069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.547 qpair failed and we were unable to recover it. 00:25:21.547 [2024-07-15 23:28:36.561359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.547 [2024-07-15 23:28:36.561408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.547 qpair failed and we were unable to recover it. 00:25:21.547 [2024-07-15 23:28:36.561684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.547 [2024-07-15 23:28:36.561711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.547 qpair failed and we were unable to recover it. 00:25:21.547 [2024-07-15 23:28:36.561990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.547 [2024-07-15 23:28:36.562013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.547 qpair failed and we were unable to recover it. 00:25:21.547 [2024-07-15 23:28:36.562304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.547 [2024-07-15 23:28:36.562330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.547 qpair failed and we were unable to recover it. 00:25:21.547 [2024-07-15 23:28:36.562623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.547 [2024-07-15 23:28:36.562672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.547 qpair failed and we were unable to recover it. 00:25:21.547 [2024-07-15 23:28:36.562953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.547 [2024-07-15 23:28:36.562981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.547 qpair failed and we were unable to recover it. 00:25:21.547 [2024-07-15 23:28:36.563281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.547 [2024-07-15 23:28:36.563320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.547 qpair failed and we were unable to recover it. 00:25:21.547 [2024-07-15 23:28:36.563592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.547 [2024-07-15 23:28:36.563619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.547 qpair failed and we were unable to recover it. 00:25:21.547 [2024-07-15 23:28:36.563886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.547 [2024-07-15 23:28:36.563914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.547 qpair failed and we were unable to recover it. 
00:25:21.547 [2024-07-15 23:28:36.564150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.547 [2024-07-15 23:28:36.564178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.547 qpair failed and we were unable to recover it. 00:25:21.547 [2024-07-15 23:28:36.564418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.547 [2024-07-15 23:28:36.564440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.547 qpair failed and we were unable to recover it. 00:25:21.547 [2024-07-15 23:28:36.564677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.547 [2024-07-15 23:28:36.564704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.547 qpair failed and we were unable to recover it. 00:25:21.547 [2024-07-15 23:28:36.564949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.547 [2024-07-15 23:28:36.564977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.547 qpair failed and we were unable to recover it. 00:25:21.547 [2024-07-15 23:28:36.565225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.547 [2024-07-15 23:28:36.565252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.547 qpair failed and we were unable to recover it. 00:25:21.547 [2024-07-15 23:28:36.565522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.547 [2024-07-15 23:28:36.565544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.547 qpair failed and we were unable to recover it. 00:25:21.547 [2024-07-15 23:28:36.565835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.547 [2024-07-15 23:28:36.565862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.547 qpair failed and we were unable to recover it. 00:25:21.547 [2024-07-15 23:28:36.566093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.547 [2024-07-15 23:28:36.566120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.547 qpair failed and we were unable to recover it. 00:25:21.547 [2024-07-15 23:28:36.566390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.547 [2024-07-15 23:28:36.566417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.547 qpair failed and we were unable to recover it. 00:25:21.547 [2024-07-15 23:28:36.566651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.547 [2024-07-15 23:28:36.566673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.547 qpair failed and we were unable to recover it. 
00:25:21.547 [2024-07-15 23:28:36.566921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.547 [2024-07-15 23:28:36.566948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.547 qpair failed and we were unable to recover it. 00:25:21.547 [2024-07-15 23:28:36.567197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.547 [2024-07-15 23:28:36.567246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.547 qpair failed and we were unable to recover it. 00:25:21.547 [2024-07-15 23:28:36.567524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.547 [2024-07-15 23:28:36.567551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.547 qpair failed and we were unable to recover it. 00:25:21.547 [2024-07-15 23:28:36.567770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.547 [2024-07-15 23:28:36.567793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.547 qpair failed and we were unable to recover it. 00:25:21.547 [2024-07-15 23:28:36.568050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.547 [2024-07-15 23:28:36.568077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.547 qpair failed and we were unable to recover it. 00:25:21.547 [2024-07-15 23:28:36.568358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.547 [2024-07-15 23:28:36.568408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.547 qpair failed and we were unable to recover it. 00:25:21.547 [2024-07-15 23:28:36.568680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.547 [2024-07-15 23:28:36.568707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.547 qpair failed and we were unable to recover it. 00:25:21.547 [2024-07-15 23:28:36.568985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.547 [2024-07-15 23:28:36.569008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.547 qpair failed and we were unable to recover it. 00:25:21.547 [2024-07-15 23:28:36.569254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.547 [2024-07-15 23:28:36.569282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.547 qpair failed and we were unable to recover it. 00:25:21.547 [2024-07-15 23:28:36.569554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.547 [2024-07-15 23:28:36.569602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.547 qpair failed and we were unable to recover it. 
00:25:21.547 [2024-07-15 23:28:36.569822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.547 [2024-07-15 23:28:36.569850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.547 qpair failed and we were unable to recover it. 00:25:21.547 [2024-07-15 23:28:36.570077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.547 [2024-07-15 23:28:36.570099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.547 qpair failed and we were unable to recover it. 00:25:21.547 [2024-07-15 23:28:36.570384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.547 [2024-07-15 23:28:36.570411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.547 qpair failed and we were unable to recover it. 00:25:21.547 [2024-07-15 23:28:36.570695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.548 [2024-07-15 23:28:36.570756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.548 qpair failed and we were unable to recover it. 00:25:21.548 [2024-07-15 23:28:36.571027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.548 [2024-07-15 23:28:36.571054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.548 qpair failed and we were unable to recover it. 00:25:21.548 [2024-07-15 23:28:36.571271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.548 [2024-07-15 23:28:36.571293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.548 qpair failed and we were unable to recover it. 00:25:21.548 [2024-07-15 23:28:36.571570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.548 [2024-07-15 23:28:36.571597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.548 qpair failed and we were unable to recover it. 00:25:21.548 [2024-07-15 23:28:36.571816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.548 [2024-07-15 23:28:36.571844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.548 qpair failed and we were unable to recover it. 00:25:21.548 [2024-07-15 23:28:36.572115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.548 [2024-07-15 23:28:36.572143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.548 qpair failed and we were unable to recover it. 00:25:21.548 [2024-07-15 23:28:36.572414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.548 [2024-07-15 23:28:36.572437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.548 qpair failed and we were unable to recover it. 
00:25:21.548 [2024-07-15 23:28:36.572664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.548 [2024-07-15 23:28:36.572691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.548 qpair failed and we were unable to recover it. 00:25:21.548 [2024-07-15 23:28:36.572978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.548 [2024-07-15 23:28:36.573011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.548 qpair failed and we were unable to recover it. 00:25:21.548 [2024-07-15 23:28:36.573245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.548 [2024-07-15 23:28:36.573272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.548 qpair failed and we were unable to recover it. 00:25:21.548 [2024-07-15 23:28:36.573542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.548 [2024-07-15 23:28:36.573564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.548 qpair failed and we were unable to recover it. 00:25:21.548 [2024-07-15 23:28:36.573784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.548 [2024-07-15 23:28:36.573812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.548 qpair failed and we were unable to recover it. 00:25:21.548 [2024-07-15 23:28:36.574051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.548 [2024-07-15 23:28:36.574078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.548 qpair failed and we were unable to recover it. 00:25:21.548 [2024-07-15 23:28:36.574315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.548 [2024-07-15 23:28:36.574342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.548 qpair failed and we were unable to recover it. 00:25:21.548 [2024-07-15 23:28:36.574577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.548 [2024-07-15 23:28:36.574599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.548 qpair failed and we were unable to recover it. 00:25:21.548 [2024-07-15 23:28:36.574892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.548 [2024-07-15 23:28:36.574920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.548 qpair failed and we were unable to recover it. 00:25:21.548 [2024-07-15 23:28:36.575176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.548 [2024-07-15 23:28:36.575226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.548 qpair failed and we were unable to recover it. 
00:25:21.548 [2024-07-15 23:28:36.575494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.548 [2024-07-15 23:28:36.575521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.548 qpair failed and we were unable to recover it. 00:25:21.548 [2024-07-15 23:28:36.575792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.548 [2024-07-15 23:28:36.575815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.548 qpair failed and we were unable to recover it. 00:25:21.548 [2024-07-15 23:28:36.576098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.548 [2024-07-15 23:28:36.576125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.548 qpair failed and we were unable to recover it. 00:25:21.548 [2024-07-15 23:28:36.576362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.548 [2024-07-15 23:28:36.576409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.548 qpair failed and we were unable to recover it. 00:25:21.548 [2024-07-15 23:28:36.576645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.548 [2024-07-15 23:28:36.576672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.548 qpair failed and we were unable to recover it. 00:25:21.548 [2024-07-15 23:28:36.576947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.548 [2024-07-15 23:28:36.576970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.548 qpair failed and we were unable to recover it. 00:25:21.548 [2024-07-15 23:28:36.577215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.548 [2024-07-15 23:28:36.577242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.548 qpair failed and we were unable to recover it. 00:25:21.548 [2024-07-15 23:28:36.577495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.548 [2024-07-15 23:28:36.577543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.548 qpair failed and we were unable to recover it. 00:25:21.548 [2024-07-15 23:28:36.577790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.548 [2024-07-15 23:28:36.577817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.548 qpair failed and we were unable to recover it. 00:25:21.548 [2024-07-15 23:28:36.578036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.548 [2024-07-15 23:28:36.578058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.548 qpair failed and we were unable to recover it. 
00:25:21.548 [2024-07-15 23:28:36.578313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.548 [2024-07-15 23:28:36.578340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420
00:25:21.548 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (connect() failed, errno = 111; sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 23:28:36.578 through 23:28:36.638 ...]
00:25:21.554 [2024-07-15 23:28:36.638649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.554 [2024-07-15 23:28:36.638701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420
00:25:21.554 qpair failed and we were unable to recover it.
00:25:21.554 [2024-07-15 23:28:36.638979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.554 [2024-07-15 23:28:36.639002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.554 qpair failed and we were unable to recover it. 00:25:21.554 [2024-07-15 23:28:36.639276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.554 [2024-07-15 23:28:36.639299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.554 qpair failed and we were unable to recover it. 00:25:21.554 [2024-07-15 23:28:36.639508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.554 [2024-07-15 23:28:36.639535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.554 qpair failed and we were unable to recover it. 00:25:21.554 [2024-07-15 23:28:36.639775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.554 [2024-07-15 23:28:36.639803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.554 qpair failed and we were unable to recover it. 00:25:21.554 [2024-07-15 23:28:36.640021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.554 [2024-07-15 23:28:36.640048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.554 qpair failed and we were unable to recover it. 00:25:21.554 [2024-07-15 23:28:36.640279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.554 [2024-07-15 23:28:36.640302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.554 qpair failed and we were unable to recover it. 00:25:21.554 [2024-07-15 23:28:36.640585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.554 [2024-07-15 23:28:36.640612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.554 qpair failed and we were unable to recover it. 00:25:21.554 [2024-07-15 23:28:36.640846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.554 [2024-07-15 23:28:36.640874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.554 qpair failed and we were unable to recover it. 00:25:21.554 [2024-07-15 23:28:36.641087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.554 [2024-07-15 23:28:36.641114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.554 qpair failed and we were unable to recover it. 00:25:21.554 [2024-07-15 23:28:36.641326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.554 [2024-07-15 23:28:36.641348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.554 qpair failed and we were unable to recover it. 
00:25:21.554 [2024-07-15 23:28:36.641627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.554 [2024-07-15 23:28:36.641654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.554 qpair failed and we were unable to recover it. 00:25:21.554 [2024-07-15 23:28:36.641938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.554 [2024-07-15 23:28:36.641966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.554 qpair failed and we were unable to recover it. 00:25:21.554 [2024-07-15 23:28:36.642235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.554 [2024-07-15 23:28:36.642262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.554 qpair failed and we were unable to recover it. 00:25:21.554 [2024-07-15 23:28:36.642530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.554 [2024-07-15 23:28:36.642552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.554 qpair failed and we were unable to recover it. 00:25:21.554 [2024-07-15 23:28:36.642813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.554 [2024-07-15 23:28:36.642840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.554 qpair failed and we were unable to recover it. 00:25:21.554 [2024-07-15 23:28:36.643123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.554 [2024-07-15 23:28:36.643178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.554 qpair failed and we were unable to recover it. 00:25:21.554 [2024-07-15 23:28:36.643417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.554 [2024-07-15 23:28:36.643445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.554 qpair failed and we were unable to recover it. 00:25:21.554 [2024-07-15 23:28:36.643682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.554 [2024-07-15 23:28:36.643704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.554 qpair failed and we were unable to recover it. 00:25:21.554 [2024-07-15 23:28:36.643997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.554 [2024-07-15 23:28:36.644025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.554 qpair failed and we were unable to recover it. 00:25:21.554 [2024-07-15 23:28:36.644273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.554 [2024-07-15 23:28:36.644323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.554 qpair failed and we were unable to recover it. 
00:25:21.554 [2024-07-15 23:28:36.644540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.554 [2024-07-15 23:28:36.644567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.554 qpair failed and we were unable to recover it. 00:25:21.554 [2024-07-15 23:28:36.644836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.554 [2024-07-15 23:28:36.644859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.554 qpair failed and we were unable to recover it. 00:25:21.554 [2024-07-15 23:28:36.645149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.554 [2024-07-15 23:28:36.645176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.554 qpair failed and we were unable to recover it. 00:25:21.554 [2024-07-15 23:28:36.645433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.554 [2024-07-15 23:28:36.645482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.554 qpair failed and we were unable to recover it. 00:25:21.554 [2024-07-15 23:28:36.645771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.554 [2024-07-15 23:28:36.645799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.554 qpair failed and we were unable to recover it. 00:25:21.554 [2024-07-15 23:28:36.646071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.554 [2024-07-15 23:28:36.646093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.554 qpair failed and we were unable to recover it. 00:25:21.554 [2024-07-15 23:28:36.646363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.554 [2024-07-15 23:28:36.646390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.554 qpair failed and we were unable to recover it. 00:25:21.554 [2024-07-15 23:28:36.646629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.554 [2024-07-15 23:28:36.646678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.554 qpair failed and we were unable to recover it. 00:25:21.554 [2024-07-15 23:28:36.646950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.554 [2024-07-15 23:28:36.646974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.554 qpair failed and we were unable to recover it. 00:25:21.554 [2024-07-15 23:28:36.647209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.554 [2024-07-15 23:28:36.647231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.554 qpair failed and we were unable to recover it. 
00:25:21.554 [2024-07-15 23:28:36.647513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.554 [2024-07-15 23:28:36.647540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.554 qpair failed and we were unable to recover it. 00:25:21.554 [2024-07-15 23:28:36.647769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.554 [2024-07-15 23:28:36.647797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.554 qpair failed and we were unable to recover it. 00:25:21.554 [2024-07-15 23:28:36.648008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.554 [2024-07-15 23:28:36.648035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.554 qpair failed and we were unable to recover it. 00:25:21.554 [2024-07-15 23:28:36.648283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.554 [2024-07-15 23:28:36.648305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.554 qpair failed and we were unable to recover it. 00:25:21.554 [2024-07-15 23:28:36.648549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.554 [2024-07-15 23:28:36.648577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.554 qpair failed and we were unable to recover it. 00:25:21.554 [2024-07-15 23:28:36.648864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.554 [2024-07-15 23:28:36.648892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.554 qpair failed and we were unable to recover it. 00:25:21.554 [2024-07-15 23:28:36.649120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.555 [2024-07-15 23:28:36.649147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.555 qpair failed and we were unable to recover it. 00:25:21.555 [2024-07-15 23:28:36.649343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.555 [2024-07-15 23:28:36.649366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.555 qpair failed and we were unable to recover it. 00:25:21.555 [2024-07-15 23:28:36.649648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.555 [2024-07-15 23:28:36.649675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.555 qpair failed and we were unable to recover it. 00:25:21.555 [2024-07-15 23:28:36.649951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.555 [2024-07-15 23:28:36.649979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.555 qpair failed and we were unable to recover it. 
00:25:21.555 [2024-07-15 23:28:36.650213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.555 [2024-07-15 23:28:36.650240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.555 qpair failed and we were unable to recover it. 00:25:21.555 [2024-07-15 23:28:36.650504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.555 [2024-07-15 23:28:36.650527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.555 qpair failed and we were unable to recover it. 00:25:21.555 [2024-07-15 23:28:36.650817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.555 [2024-07-15 23:28:36.650849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.555 qpair failed and we were unable to recover it. 00:25:21.555 [2024-07-15 23:28:36.651071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.555 [2024-07-15 23:28:36.651098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.555 qpair failed and we were unable to recover it. 00:25:21.555 [2024-07-15 23:28:36.651369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.555 [2024-07-15 23:28:36.651396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.555 qpair failed and we were unable to recover it. 00:25:21.555 [2024-07-15 23:28:36.651667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.555 [2024-07-15 23:28:36.651690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.555 qpair failed and we were unable to recover it. 00:25:21.555 [2024-07-15 23:28:36.651915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.555 [2024-07-15 23:28:36.651943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.555 qpair failed and we were unable to recover it. 00:25:21.555 [2024-07-15 23:28:36.652192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.555 [2024-07-15 23:28:36.652239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.555 qpair failed and we were unable to recover it. 00:25:21.555 [2024-07-15 23:28:36.652429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.555 [2024-07-15 23:28:36.652457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.555 qpair failed and we were unable to recover it. 00:25:21.555 [2024-07-15 23:28:36.652692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.555 [2024-07-15 23:28:36.652714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.555 qpair failed and we were unable to recover it. 
00:25:21.555 [2024-07-15 23:28:36.653003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.555 [2024-07-15 23:28:36.653031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.555 qpair failed and we were unable to recover it. 00:25:21.555 [2024-07-15 23:28:36.653242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.555 [2024-07-15 23:28:36.653292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.555 qpair failed and we were unable to recover it. 00:25:21.555 [2024-07-15 23:28:36.653573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.555 [2024-07-15 23:28:36.653601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.555 qpair failed and we were unable to recover it. 00:25:21.555 [2024-07-15 23:28:36.653898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.555 [2024-07-15 23:28:36.653922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.555 qpair failed and we were unable to recover it. 00:25:21.555 [2024-07-15 23:28:36.654214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.555 [2024-07-15 23:28:36.654241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.555 qpair failed and we were unable to recover it. 00:25:21.555 [2024-07-15 23:28:36.654533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.555 [2024-07-15 23:28:36.654582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.555 qpair failed and we were unable to recover it. 00:25:21.555 [2024-07-15 23:28:36.654849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.555 [2024-07-15 23:28:36.654877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.555 qpair failed and we were unable to recover it. 00:25:21.555 [2024-07-15 23:28:36.655163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.555 [2024-07-15 23:28:36.655186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.555 qpair failed and we were unable to recover it. 00:25:21.555 [2024-07-15 23:28:36.655460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.555 [2024-07-15 23:28:36.655486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.555 qpair failed and we were unable to recover it. 00:25:21.555 [2024-07-15 23:28:36.655763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.555 [2024-07-15 23:28:36.655791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.555 qpair failed and we were unable to recover it. 
00:25:21.555 [2024-07-15 23:28:36.656060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.555 [2024-07-15 23:28:36.656087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.555 qpair failed and we were unable to recover it. 00:25:21.555 [2024-07-15 23:28:36.656309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.555 [2024-07-15 23:28:36.656331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.555 qpair failed and we were unable to recover it. 00:25:21.555 [2024-07-15 23:28:36.656608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.555 [2024-07-15 23:28:36.656635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.555 qpair failed and we were unable to recover it. 00:25:21.555 [2024-07-15 23:28:36.656884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.555 [2024-07-15 23:28:36.656911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.555 qpair failed and we were unable to recover it. 00:25:21.555 [2024-07-15 23:28:36.657184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.555 [2024-07-15 23:28:36.657212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.555 qpair failed and we were unable to recover it. 00:25:21.555 [2024-07-15 23:28:36.657427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.555 [2024-07-15 23:28:36.657449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.555 qpair failed and we were unable to recover it. 00:25:21.555 [2024-07-15 23:28:36.657696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.555 [2024-07-15 23:28:36.657723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.555 qpair failed and we were unable to recover it. 00:25:21.555 [2024-07-15 23:28:36.657982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.555 [2024-07-15 23:28:36.658010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.555 qpair failed and we were unable to recover it. 00:25:21.555 [2024-07-15 23:28:36.658285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.555 [2024-07-15 23:28:36.658312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.555 qpair failed and we were unable to recover it. 00:25:21.555 [2024-07-15 23:28:36.658594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.555 [2024-07-15 23:28:36.658616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.555 qpair failed and we were unable to recover it. 
00:25:21.555 [2024-07-15 23:28:36.658892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.555 [2024-07-15 23:28:36.658920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.555 qpair failed and we were unable to recover it. 00:25:21.555 [2024-07-15 23:28:36.659155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.555 [2024-07-15 23:28:36.659202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.555 qpair failed and we were unable to recover it. 00:25:21.555 [2024-07-15 23:28:36.659420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.555 [2024-07-15 23:28:36.659447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.555 qpair failed and we were unable to recover it. 00:25:21.555 [2024-07-15 23:28:36.659681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.555 [2024-07-15 23:28:36.659703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.555 qpair failed and we were unable to recover it. 00:25:21.555 [2024-07-15 23:28:36.659956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.555 [2024-07-15 23:28:36.659984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.555 qpair failed and we were unable to recover it. 00:25:21.555 [2024-07-15 23:28:36.660274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.555 [2024-07-15 23:28:36.660321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.555 qpair failed and we were unable to recover it. 00:25:21.555 [2024-07-15 23:28:36.660600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.555 [2024-07-15 23:28:36.660627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.555 qpair failed and we were unable to recover it. 00:25:21.556 [2024-07-15 23:28:36.660904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.556 [2024-07-15 23:28:36.660926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.556 qpair failed and we were unable to recover it. 00:25:21.556 [2024-07-15 23:28:36.661156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.556 [2024-07-15 23:28:36.661183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.556 qpair failed and we were unable to recover it. 00:25:21.556 [2024-07-15 23:28:36.661430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.556 [2024-07-15 23:28:36.661480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.556 qpair failed and we were unable to recover it. 
00:25:21.556 [2024-07-15 23:28:36.661711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.556 [2024-07-15 23:28:36.661744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.556 qpair failed and we were unable to recover it. 00:25:21.556 [2024-07-15 23:28:36.661968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.556 [2024-07-15 23:28:36.661991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.556 qpair failed and we were unable to recover it. 00:25:21.556 [2024-07-15 23:28:36.662276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.556 [2024-07-15 23:28:36.662304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.556 qpair failed and we were unable to recover it. 00:25:21.556 [2024-07-15 23:28:36.662556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.556 [2024-07-15 23:28:36.662606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.556 qpair failed and we were unable to recover it. 00:25:21.556 [2024-07-15 23:28:36.662885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.556 [2024-07-15 23:28:36.662913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.556 qpair failed and we were unable to recover it. 00:25:21.556 [2024-07-15 23:28:36.663204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.556 [2024-07-15 23:28:36.663226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.556 qpair failed and we were unable to recover it. 00:25:21.556 [2024-07-15 23:28:36.663517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.556 [2024-07-15 23:28:36.663544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.556 qpair failed and we were unable to recover it. 00:25:21.556 [2024-07-15 23:28:36.663789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.556 [2024-07-15 23:28:36.663817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.556 qpair failed and we were unable to recover it. 00:25:21.556 [2024-07-15 23:28:36.664093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.556 [2024-07-15 23:28:36.664120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.556 qpair failed and we were unable to recover it. 00:25:21.556 [2024-07-15 23:28:36.664389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.556 [2024-07-15 23:28:36.664411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.556 qpair failed and we were unable to recover it. 
00:25:21.556 [2024-07-15 23:28:36.664708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.556 [2024-07-15 23:28:36.664735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.556 qpair failed and we were unable to recover it. 00:25:21.556 [2024-07-15 23:28:36.664977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.556 [2024-07-15 23:28:36.665005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.556 qpair failed and we were unable to recover it. 00:25:21.556 [2024-07-15 23:28:36.665221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.556 [2024-07-15 23:28:36.665248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.556 qpair failed and we were unable to recover it. 00:25:21.556 [2024-07-15 23:28:36.665527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.556 [2024-07-15 23:28:36.665549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.556 qpair failed and we were unable to recover it. 00:25:21.556 [2024-07-15 23:28:36.665787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.556 [2024-07-15 23:28:36.665815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.556 qpair failed and we were unable to recover it. 00:25:21.556 [2024-07-15 23:28:36.666045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.556 [2024-07-15 23:28:36.666072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.556 qpair failed and we were unable to recover it. 00:25:21.556 [2024-07-15 23:28:36.666294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.556 [2024-07-15 23:28:36.666321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.556 qpair failed and we were unable to recover it. 00:25:21.556 [2024-07-15 23:28:36.666553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.556 [2024-07-15 23:28:36.666576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.556 qpair failed and we were unable to recover it. 00:25:21.556 [2024-07-15 23:28:36.666860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.556 [2024-07-15 23:28:36.666888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.556 qpair failed and we were unable to recover it. 00:25:21.556 [2024-07-15 23:28:36.667118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.556 [2024-07-15 23:28:36.667145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.556 qpair failed and we were unable to recover it. 
00:25:21.556 [2024-07-15 23:28:36.667376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.556 [2024-07-15 23:28:36.667402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.556 qpair failed and we were unable to recover it. 00:25:21.556 [2024-07-15 23:28:36.667686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.556 [2024-07-15 23:28:36.667708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.556 qpair failed and we were unable to recover it. 00:25:21.556 [2024-07-15 23:28:36.667995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.556 [2024-07-15 23:28:36.668023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.556 qpair failed and we were unable to recover it. 00:25:21.556 [2024-07-15 23:28:36.668233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.556 [2024-07-15 23:28:36.668280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.556 qpair failed and we were unable to recover it. 00:25:21.556 [2024-07-15 23:28:36.668560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.556 [2024-07-15 23:28:36.668587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.556 qpair failed and we were unable to recover it. 00:25:21.556 [2024-07-15 23:28:36.668821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.556 [2024-07-15 23:28:36.668844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.556 qpair failed and we were unable to recover it. 00:25:21.556 [2024-07-15 23:28:36.669067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.556 [2024-07-15 23:28:36.669094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.556 qpair failed and we were unable to recover it. 00:25:21.556 [2024-07-15 23:28:36.669375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.556 [2024-07-15 23:28:36.669425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.556 qpair failed and we were unable to recover it. 00:25:21.556 [2024-07-15 23:28:36.669689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.556 [2024-07-15 23:28:36.669716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.556 qpair failed and we were unable to recover it. 00:25:21.556 [2024-07-15 23:28:36.669950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.556 [2024-07-15 23:28:36.669973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.556 qpair failed and we were unable to recover it. 
00:25:21.556 [2024-07-15 23:28:36.670255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.556 [2024-07-15 23:28:36.670287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.556 qpair failed and we were unable to recover it. 00:25:21.556 [2024-07-15 23:28:36.670531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.556 [2024-07-15 23:28:36.670581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.556 qpair failed and we were unable to recover it. 00:25:21.556 [2024-07-15 23:28:36.670854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.556 [2024-07-15 23:28:36.670882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.556 qpair failed and we were unable to recover it. 00:25:21.556 [2024-07-15 23:28:36.671110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.556 [2024-07-15 23:28:36.671132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.556 qpair failed and we were unable to recover it. 00:25:21.556 [2024-07-15 23:28:36.671409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.556 [2024-07-15 23:28:36.671436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.556 qpair failed and we were unable to recover it. 00:25:21.556 [2024-07-15 23:28:36.671714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.556 [2024-07-15 23:28:36.671770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.556 qpair failed and we were unable to recover it. 00:25:21.556 [2024-07-15 23:28:36.672049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.556 [2024-07-15 23:28:36.672076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.556 qpair failed and we were unable to recover it. 00:25:21.556 [2024-07-15 23:28:36.672349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.556 [2024-07-15 23:28:36.672371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.557 qpair failed and we were unable to recover it. 00:25:21.557 [2024-07-15 23:28:36.672591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.557 [2024-07-15 23:28:36.672618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.557 qpair failed and we were unable to recover it. 00:25:21.557 [2024-07-15 23:28:36.672825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.557 [2024-07-15 23:28:36.672853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.557 qpair failed and we were unable to recover it. 
00:25:21.557 [2024-07-15 23:28:36.673121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.557 [2024-07-15 23:28:36.673148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.557 qpair failed and we were unable to recover it. 00:25:21.557 [2024-07-15 23:28:36.673415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.557 [2024-07-15 23:28:36.673437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.557 qpair failed and we were unable to recover it. 00:25:21.557 [2024-07-15 23:28:36.673686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.557 [2024-07-15 23:28:36.673713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.557 qpair failed and we were unable to recover it. 00:25:21.557 [2024-07-15 23:28:36.673954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.557 [2024-07-15 23:28:36.673982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.557 qpair failed and we were unable to recover it. 00:25:21.557 [2024-07-15 23:28:36.674208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.557 [2024-07-15 23:28:36.674236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.557 qpair failed and we were unable to recover it. 00:25:21.557 [2024-07-15 23:28:36.674412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.557 [2024-07-15 23:28:36.674434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.557 qpair failed and we were unable to recover it. 00:25:21.557 [2024-07-15 23:28:36.674672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.557 [2024-07-15 23:28:36.674699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.557 qpair failed and we were unable to recover it. 00:25:21.557 [2024-07-15 23:28:36.674899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.557 [2024-07-15 23:28:36.674927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.557 qpair failed and we were unable to recover it. 00:25:21.557 [2024-07-15 23:28:36.675192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.557 [2024-07-15 23:28:36.675219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.557 qpair failed and we were unable to recover it. 00:25:21.557 [2024-07-15 23:28:36.675496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.557 [2024-07-15 23:28:36.675518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.557 qpair failed and we were unable to recover it. 
00:25:21.557 [2024-07-15 23:28:36.675760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.557 [2024-07-15 23:28:36.675789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.557 qpair failed and we were unable to recover it. 00:25:21.557 [2024-07-15 23:28:36.676067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.557 [2024-07-15 23:28:36.676094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.557 qpair failed and we were unable to recover it. 00:25:21.557 [2024-07-15 23:28:36.676301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.557 [2024-07-15 23:28:36.676328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.557 qpair failed and we were unable to recover it. 00:25:21.557 [2024-07-15 23:28:36.676515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.557 [2024-07-15 23:28:36.676537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.557 qpair failed and we were unable to recover it. 00:25:21.557 [2024-07-15 23:28:36.676761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.557 [2024-07-15 23:28:36.676789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.557 qpair failed and we were unable to recover it. 00:25:21.557 [2024-07-15 23:28:36.677017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.557 [2024-07-15 23:28:36.677044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.557 qpair failed and we were unable to recover it. 00:25:21.557 [2024-07-15 23:28:36.677317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.557 [2024-07-15 23:28:36.677344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.557 qpair failed and we were unable to recover it. 00:25:21.557 [2024-07-15 23:28:36.677614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.557 [2024-07-15 23:28:36.677640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.557 qpair failed and we were unable to recover it. 00:25:21.557 [2024-07-15 23:28:36.677946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.557 [2024-07-15 23:28:36.677971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.557 qpair failed and we were unable to recover it. 00:25:21.557 [2024-07-15 23:28:36.678233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.557 [2024-07-15 23:28:36.678289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.557 qpair failed and we were unable to recover it. 
00:25:21.557 [2024-07-15 23:28:36.678540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.557 [2024-07-15 23:28:36.678567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.557 qpair failed and we were unable to recover it. 00:25:21.557 [2024-07-15 23:28:36.678807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.557 [2024-07-15 23:28:36.678831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.557 qpair failed and we were unable to recover it. 00:25:21.557 [2024-07-15 23:28:36.679064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.557 [2024-07-15 23:28:36.679091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.557 qpair failed and we were unable to recover it. 00:25:21.557 [2024-07-15 23:28:36.679366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.557 [2024-07-15 23:28:36.679417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.557 qpair failed and we were unable to recover it. 00:25:21.557 [2024-07-15 23:28:36.679688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.557 [2024-07-15 23:28:36.679715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.557 qpair failed and we were unable to recover it. 00:25:21.557 [2024-07-15 23:28:36.680007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.557 [2024-07-15 23:28:36.680031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.557 qpair failed and we were unable to recover it. 00:25:21.557 [2024-07-15 23:28:36.680285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.557 [2024-07-15 23:28:36.680312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.557 qpair failed and we were unable to recover it. 00:25:21.557 [2024-07-15 23:28:36.680603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.557 [2024-07-15 23:28:36.680649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.557 qpair failed and we were unable to recover it. 00:25:21.557 [2024-07-15 23:28:36.680923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.557 [2024-07-15 23:28:36.680951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.557 qpair failed and we were unable to recover it. 00:25:21.557 [2024-07-15 23:28:36.681119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.557 [2024-07-15 23:28:36.681141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.557 qpair failed and we were unable to recover it. 
00:25:21.557 [2024-07-15 23:28:36.681354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.557 [2024-07-15 23:28:36.681381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.557 qpair failed and we were unable to recover it. 00:25:21.557 [2024-07-15 23:28:36.681694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.557 [2024-07-15 23:28:36.681756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.557 qpair failed and we were unable to recover it. 00:25:21.557 [2024-07-15 23:28:36.682025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.557 [2024-07-15 23:28:36.682052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.557 qpair failed and we were unable to recover it. 00:25:21.557 [2024-07-15 23:28:36.682274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.557 [2024-07-15 23:28:36.682296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.557 qpair failed and we were unable to recover it. 00:25:21.557 [2024-07-15 23:28:36.682572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.557 [2024-07-15 23:28:36.682599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.557 qpair failed and we were unable to recover it. 00:25:21.557 [2024-07-15 23:28:36.682881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.557 [2024-07-15 23:28:36.682909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.557 qpair failed and we were unable to recover it. 00:25:21.557 [2024-07-15 23:28:36.683189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.558 [2024-07-15 23:28:36.683216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.558 qpair failed and we were unable to recover it. 00:25:21.558 [2024-07-15 23:28:36.683439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.558 [2024-07-15 23:28:36.683462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.558 qpair failed and we were unable to recover it. 00:25:21.558 [2024-07-15 23:28:36.683656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.558 [2024-07-15 23:28:36.683684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.558 qpair failed and we were unable to recover it. 00:25:21.558 [2024-07-15 23:28:36.683961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.558 [2024-07-15 23:28:36.683989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.558 qpair failed and we were unable to recover it. 
00:25:21.558 [2024-07-15 23:28:36.684230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.558 [2024-07-15 23:28:36.684257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.558 qpair failed and we were unable to recover it. 00:25:21.558 [2024-07-15 23:28:36.684474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.558 [2024-07-15 23:28:36.684497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.558 qpair failed and we were unable to recover it. 00:25:21.558 [2024-07-15 23:28:36.684785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.558 [2024-07-15 23:28:36.684813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.558 qpair failed and we were unable to recover it. 00:25:21.558 [2024-07-15 23:28:36.685082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.558 [2024-07-15 23:28:36.685108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.558 qpair failed and we were unable to recover it. 00:25:21.558 [2024-07-15 23:28:36.685393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.558 [2024-07-15 23:28:36.685420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.558 qpair failed and we were unable to recover it. 00:25:21.558 [2024-07-15 23:28:36.685670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.558 [2024-07-15 23:28:36.685718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.558 qpair failed and we were unable to recover it. 00:25:21.558 [2024-07-15 23:28:36.685993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.558 [2024-07-15 23:28:36.686033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.558 qpair failed and we were unable to recover it. 00:25:21.558 [2024-07-15 23:28:36.686312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.558 [2024-07-15 23:28:36.686359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.558 qpair failed and we were unable to recover it. 00:25:21.558 [2024-07-15 23:28:36.686629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.558 [2024-07-15 23:28:36.686657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.558 qpair failed and we were unable to recover it. 00:25:21.558 [2024-07-15 23:28:36.686945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.558 [2024-07-15 23:28:36.686969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.558 qpair failed and we were unable to recover it. 
00:25:21.558 [2024-07-15 23:28:36.687260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.558 [2024-07-15 23:28:36.687287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.558 qpair failed and we were unable to recover it. 00:25:21.558 [2024-07-15 23:28:36.687522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.558 [2024-07-15 23:28:36.687571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.558 qpair failed and we were unable to recover it. 00:25:21.558 [2024-07-15 23:28:36.687835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.558 [2024-07-15 23:28:36.687859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.558 qpair failed and we were unable to recover it. 00:25:21.558 [2024-07-15 23:28:36.688101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.558 [2024-07-15 23:28:36.688123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.558 qpair failed and we were unable to recover it. 00:25:21.558 [2024-07-15 23:28:36.688423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.558 [2024-07-15 23:28:36.688450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.558 qpair failed and we were unable to recover it. 00:25:21.558 [2024-07-15 23:28:36.688693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.558 [2024-07-15 23:28:36.688750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.558 qpair failed and we were unable to recover it. 00:25:21.558 [2024-07-15 23:28:36.689038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.558 [2024-07-15 23:28:36.689065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.558 qpair failed and we were unable to recover it. 00:25:21.558 [2024-07-15 23:28:36.689286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.558 [2024-07-15 23:28:36.689308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.558 qpair failed and we were unable to recover it. 00:25:21.558 [2024-07-15 23:28:36.689510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.558 [2024-07-15 23:28:36.689538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.558 qpair failed and we were unable to recover it. 00:25:21.558 [2024-07-15 23:28:36.689770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.558 [2024-07-15 23:28:36.689798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.558 qpair failed and we were unable to recover it. 
00:25:21.558 [2024-07-15 23:28:36.690046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.558 [2024-07-15 23:28:36.690073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.558 qpair failed and we were unable to recover it. 00:25:21.558 [2024-07-15 23:28:36.690335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.558 [2024-07-15 23:28:36.690357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.558 qpair failed and we were unable to recover it. 00:25:21.558 [2024-07-15 23:28:36.690598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.558 [2024-07-15 23:28:36.690625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.558 qpair failed and we were unable to recover it. 00:25:21.558 [2024-07-15 23:28:36.690837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.558 [2024-07-15 23:28:36.690864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.558 qpair failed and we were unable to recover it. 00:25:21.558 [2024-07-15 23:28:36.691104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.558 [2024-07-15 23:28:36.691132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.558 qpair failed and we were unable to recover it. 00:25:21.558 [2024-07-15 23:28:36.691395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.558 [2024-07-15 23:28:36.691416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.558 qpair failed and we were unable to recover it. 00:25:21.558 [2024-07-15 23:28:36.691694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.558 [2024-07-15 23:28:36.691721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.558 qpair failed and we were unable to recover it. 00:25:21.558 [2024-07-15 23:28:36.691944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.558 [2024-07-15 23:28:36.691972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.558 qpair failed and we were unable to recover it. 00:25:21.558 [2024-07-15 23:28:36.692200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.558 [2024-07-15 23:28:36.692227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.558 qpair failed and we were unable to recover it. 00:25:21.558 [2024-07-15 23:28:36.692500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.558 [2024-07-15 23:28:36.692522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.558 qpair failed and we were unable to recover it. 
00:25:21.558 [2024-07-15 23:28:36.692725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.558 [2024-07-15 23:28:36.692761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.558 qpair failed and we were unable to recover it. 00:25:21.558 [2024-07-15 23:28:36.693044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.558 [2024-07-15 23:28:36.693071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.558 qpair failed and we were unable to recover it. 00:25:21.558 [2024-07-15 23:28:36.693357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.558 [2024-07-15 23:28:36.693385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.558 qpair failed and we were unable to recover it. 00:25:21.558 [2024-07-15 23:28:36.693661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.558 [2024-07-15 23:28:36.693684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.559 qpair failed and we were unable to recover it. 00:25:21.559 [2024-07-15 23:28:36.693932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.559 [2024-07-15 23:28:36.693960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.559 qpair failed and we were unable to recover it. 00:25:21.559 [2024-07-15 23:28:36.694193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.559 [2024-07-15 23:28:36.694240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.559 qpair failed and we were unable to recover it. 00:25:21.559 [2024-07-15 23:28:36.694515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.559 [2024-07-15 23:28:36.694542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.559 qpair failed and we were unable to recover it. 00:25:21.559 [2024-07-15 23:28:36.694727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.559 [2024-07-15 23:28:36.694771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.559 qpair failed and we were unable to recover it. 00:25:21.559 [2024-07-15 23:28:36.694985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.559 [2024-07-15 23:28:36.695012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.559 qpair failed and we were unable to recover it. 00:25:21.559 [2024-07-15 23:28:36.695276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.559 [2024-07-15 23:28:36.695326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.559 qpair failed and we were unable to recover it. 
00:25:21.559 [2024-07-15 23:28:36.695597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.559 [2024-07-15 23:28:36.695624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.559 qpair failed and we were unable to recover it. 00:25:21.559 [2024-07-15 23:28:36.695807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.559 [2024-07-15 23:28:36.695830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.559 qpair failed and we were unable to recover it. 00:25:21.559 [2024-07-15 23:28:36.696065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.559 [2024-07-15 23:28:36.696092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.559 qpair failed and we were unable to recover it. 00:25:21.559 [2024-07-15 23:28:36.696374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.559 [2024-07-15 23:28:36.696420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.559 qpair failed and we were unable to recover it. 00:25:21.559 [2024-07-15 23:28:36.696705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.559 [2024-07-15 23:28:36.696732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.559 qpair failed and we were unable to recover it. 00:25:21.559 [2024-07-15 23:28:36.697011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.559 [2024-07-15 23:28:36.697052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.559 qpair failed and we were unable to recover it. 00:25:21.559 [2024-07-15 23:28:36.697333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.559 [2024-07-15 23:28:36.697360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.559 qpair failed and we were unable to recover it. 00:25:21.559 [2024-07-15 23:28:36.697646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.559 [2024-07-15 23:28:36.697696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.559 qpair failed and we were unable to recover it. 00:25:21.559 [2024-07-15 23:28:36.697984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.559 [2024-07-15 23:28:36.698008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.559 qpair failed and we were unable to recover it. 00:25:21.559 [2024-07-15 23:28:36.698288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.559 [2024-07-15 23:28:36.698310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.559 qpair failed and we were unable to recover it. 
00:25:21.559 [2024-07-15 23:28:36.698547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.559 [2024-07-15 23:28:36.698574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.559 qpair failed and we were unable to recover it. 00:25:21.559 [2024-07-15 23:28:36.698848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.559 [2024-07-15 23:28:36.698876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.559 qpair failed and we were unable to recover it. 00:25:21.559 [2024-07-15 23:28:36.699114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.559 [2024-07-15 23:28:36.699141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.559 qpair failed and we were unable to recover it. 00:25:21.559 [2024-07-15 23:28:36.699329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.559 [2024-07-15 23:28:36.699352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.559 qpair failed and we were unable to recover it. 00:25:21.559 [2024-07-15 23:28:36.699647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.559 [2024-07-15 23:28:36.699674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.559 qpair failed and we were unable to recover it. 00:25:21.559 [2024-07-15 23:28:36.699944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.559 [2024-07-15 23:28:36.699972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.559 qpair failed and we were unable to recover it. 00:25:21.559 [2024-07-15 23:28:36.700211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.559 [2024-07-15 23:28:36.700238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.559 qpair failed and we were unable to recover it. 00:25:21.559 [2024-07-15 23:28:36.700461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.559 [2024-07-15 23:28:36.700483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.559 qpair failed and we were unable to recover it. 00:25:21.559 [2024-07-15 23:28:36.700775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.559 [2024-07-15 23:28:36.700803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.559 qpair failed and we were unable to recover it. 00:25:21.559 [2024-07-15 23:28:36.701043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.559 [2024-07-15 23:28:36.701070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.559 qpair failed and we were unable to recover it. 
00:25:21.559 [2024-07-15 23:28:36.701340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.559 [2024-07-15 23:28:36.701367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.559 qpair failed and we were unable to recover it. 00:25:21.559 [2024-07-15 23:28:36.701578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.559 [2024-07-15 23:28:36.701600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.559 qpair failed and we were unable to recover it. 00:25:21.559 [2024-07-15 23:28:36.701856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.559 [2024-07-15 23:28:36.701884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.559 qpair failed and we were unable to recover it. 00:25:21.559 [2024-07-15 23:28:36.702188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.559 [2024-07-15 23:28:36.702237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.559 qpair failed and we were unable to recover it. 00:25:21.559 [2024-07-15 23:28:36.702469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.559 [2024-07-15 23:28:36.702496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.559 qpair failed and we were unable to recover it. 00:25:21.559 [2024-07-15 23:28:36.702732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.559 [2024-07-15 23:28:36.702775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.559 qpair failed and we were unable to recover it. 00:25:21.559 [2024-07-15 23:28:36.703041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.559 [2024-07-15 23:28:36.703068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.559 qpair failed and we were unable to recover it. 00:25:21.559 [2024-07-15 23:28:36.703309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.559 [2024-07-15 23:28:36.703358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.559 qpair failed and we were unable to recover it. 00:25:21.559 [2024-07-15 23:28:36.703594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.559 [2024-07-15 23:28:36.703621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.559 qpair failed and we were unable to recover it. 00:25:21.559 [2024-07-15 23:28:36.703848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.559 [2024-07-15 23:28:36.703871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.559 qpair failed and we were unable to recover it. 
00:25:21.559 [2024-07-15 23:28:36.704096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.559 [2024-07-15 23:28:36.704123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.559 qpair failed and we were unable to recover it. 00:25:21.559 [2024-07-15 23:28:36.704384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.559 [2024-07-15 23:28:36.704433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.559 qpair failed and we were unable to recover it. 00:25:21.559 [2024-07-15 23:28:36.704700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.559 [2024-07-15 23:28:36.704732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.559 qpair failed and we were unable to recover it. 00:25:21.559 [2024-07-15 23:28:36.705033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.559 [2024-07-15 23:28:36.705056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.559 qpair failed and we were unable to recover it. 00:25:21.559 [2024-07-15 23:28:36.705338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.559 [2024-07-15 23:28:36.705366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.560 qpair failed and we were unable to recover it. 00:25:21.560 [2024-07-15 23:28:36.705612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.560 [2024-07-15 23:28:36.705661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.560 qpair failed and we were unable to recover it. 00:25:21.560 [2024-07-15 23:28:36.705894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.560 [2024-07-15 23:28:36.705922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.560 qpair failed and we were unable to recover it. 00:25:21.560 [2024-07-15 23:28:36.706151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.560 [2024-07-15 23:28:36.706175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.560 qpair failed and we were unable to recover it. 00:25:21.560 [2024-07-15 23:28:36.706455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.560 [2024-07-15 23:28:36.706483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.560 qpair failed and we were unable to recover it. 00:25:21.560 [2024-07-15 23:28:36.706760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.560 [2024-07-15 23:28:36.706788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.560 qpair failed and we were unable to recover it. 
00:25:21.560 [2024-07-15 23:28:36.707020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.560 [2024-07-15 23:28:36.707047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.560 qpair failed and we were unable to recover it. 00:25:21.560 [2024-07-15 23:28:36.707313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.560 [2024-07-15 23:28:36.707337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.560 qpair failed and we were unable to recover it. 00:25:21.560 [2024-07-15 23:28:36.707583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.560 [2024-07-15 23:28:36.707610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.560 qpair failed and we were unable to recover it. 00:25:21.560 [2024-07-15 23:28:36.707882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.560 [2024-07-15 23:28:36.707909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.560 qpair failed and we were unable to recover it. 00:25:21.560 [2024-07-15 23:28:36.708179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.560 [2024-07-15 23:28:36.708207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.560 qpair failed and we were unable to recover it. 00:25:21.560 [2024-07-15 23:28:36.708452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.560 [2024-07-15 23:28:36.708476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.560 qpair failed and we were unable to recover it. 00:25:21.560 [2024-07-15 23:28:36.708753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.560 [2024-07-15 23:28:36.708781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.560 qpair failed and we were unable to recover it. 00:25:21.560 [2024-07-15 23:28:36.709054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.560 [2024-07-15 23:28:36.709082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.560 qpair failed and we were unable to recover it. 00:25:21.560 [2024-07-15 23:28:36.709351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.560 [2024-07-15 23:28:36.709379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.560 qpair failed and we were unable to recover it. 00:25:21.560 [2024-07-15 23:28:36.709608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.560 [2024-07-15 23:28:36.709632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.560 qpair failed and we were unable to recover it. 
00:25:21.560 [2024-07-15 23:28:36.709843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.560 [2024-07-15 23:28:36.709870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.560 qpair failed and we were unable to recover it. 00:25:21.560 [2024-07-15 23:28:36.710151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.560 [2024-07-15 23:28:36.710201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.560 qpair failed and we were unable to recover it. 00:25:21.560 [2024-07-15 23:28:36.710435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.560 [2024-07-15 23:28:36.710462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.560 qpair failed and we were unable to recover it. 00:25:21.560 [2024-07-15 23:28:36.710746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.560 [2024-07-15 23:28:36.710785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.560 qpair failed and we were unable to recover it. 00:25:21.560 [2024-07-15 23:28:36.711086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.560 [2024-07-15 23:28:36.711113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.560 qpair failed and we were unable to recover it. 00:25:21.560 [2024-07-15 23:28:36.711368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.560 [2024-07-15 23:28:36.711396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.560 qpair failed and we were unable to recover it. 00:25:21.560 [2024-07-15 23:28:36.711669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.560 [2024-07-15 23:28:36.711696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.560 qpair failed and we were unable to recover it. 00:25:21.560 [2024-07-15 23:28:36.711946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.560 [2024-07-15 23:28:36.711972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.560 qpair failed and we were unable to recover it. 00:25:21.560 [2024-07-15 23:28:36.712238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.560 [2024-07-15 23:28:36.712266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.560 qpair failed and we were unable to recover it. 00:25:21.560 [2024-07-15 23:28:36.712490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.560 [2024-07-15 23:28:36.712545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.560 qpair failed and we were unable to recover it. 
00:25:21.560 [2024-07-15 23:28:36.712812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.560 [2024-07-15 23:28:36.712841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.560 qpair failed and we were unable to recover it. 00:25:21.560 [2024-07-15 23:28:36.713125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.560 [2024-07-15 23:28:36.713149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.560 qpair failed and we were unable to recover it. 00:25:21.560 [2024-07-15 23:28:36.713431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.560 [2024-07-15 23:28:36.713459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.560 qpair failed and we were unable to recover it. 00:25:21.560 [2024-07-15 23:28:36.713759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.560 [2024-07-15 23:28:36.713822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.560 qpair failed and we were unable to recover it. 00:25:21.560 [2024-07-15 23:28:36.714114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.560 [2024-07-15 23:28:36.714141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.560 qpair failed and we were unable to recover it. 00:25:21.560 [2024-07-15 23:28:36.714382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.560 [2024-07-15 23:28:36.714406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.560 qpair failed and we were unable to recover it. 00:25:21.560 [2024-07-15 23:28:36.714684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.560 [2024-07-15 23:28:36.714712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.560 qpair failed and we were unable to recover it. 00:25:21.560 [2024-07-15 23:28:36.714965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.560 [2024-07-15 23:28:36.714992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.560 qpair failed and we were unable to recover it. 00:25:21.560 [2024-07-15 23:28:36.715226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.560 [2024-07-15 23:28:36.715253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.560 qpair failed and we were unable to recover it. 00:25:21.560 [2024-07-15 23:28:36.715497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.560 [2024-07-15 23:28:36.715522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.560 qpair failed and we were unable to recover it. 
00:25:21.560 [2024-07-15 23:28:36.715752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.560 [2024-07-15 23:28:36.715779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.560 qpair failed and we were unable to recover it. 00:25:21.560 [2024-07-15 23:28:36.716019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.560 [2024-07-15 23:28:36.716046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.560 qpair failed and we were unable to recover it. 00:25:21.560 [2024-07-15 23:28:36.716275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.560 [2024-07-15 23:28:36.716302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.560 qpair failed and we were unable to recover it. 00:25:21.560 [2024-07-15 23:28:36.716575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.560 [2024-07-15 23:28:36.716600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.560 qpair failed and we were unable to recover it. 00:25:21.560 [2024-07-15 23:28:36.716872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.560 [2024-07-15 23:28:36.716900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.560 qpair failed and we were unable to recover it. 00:25:21.560 [2024-07-15 23:28:36.717138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.560 [2024-07-15 23:28:36.717190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.560 qpair failed and we were unable to recover it. 00:25:21.560 [2024-07-15 23:28:36.717424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.561 [2024-07-15 23:28:36.717451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.561 qpair failed and we were unable to recover it. 00:25:21.561 [2024-07-15 23:28:36.717703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.561 [2024-07-15 23:28:36.717726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.561 qpair failed and we were unable to recover it. 00:25:21.561 [2024-07-15 23:28:36.717987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.561 [2024-07-15 23:28:36.718014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.561 qpair failed and we were unable to recover it. 00:25:21.561 [2024-07-15 23:28:36.718259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.561 [2024-07-15 23:28:36.718309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.561 qpair failed and we were unable to recover it. 
00:25:21.561 [2024-07-15 23:28:36.718586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.561 [2024-07-15 23:28:36.718614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.561 qpair failed and we were unable to recover it. 00:25:21.561 [2024-07-15 23:28:36.718847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.561 [2024-07-15 23:28:36.718872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.561 qpair failed and we were unable to recover it. 00:25:21.561 [2024-07-15 23:28:36.719135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.561 [2024-07-15 23:28:36.719162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.561 qpair failed and we were unable to recover it. 00:25:21.561 [2024-07-15 23:28:36.719360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.561 [2024-07-15 23:28:36.719411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.561 qpair failed and we were unable to recover it. 00:25:21.561 [2024-07-15 23:28:36.719681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.561 [2024-07-15 23:28:36.719708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.561 qpair failed and we were unable to recover it. 00:25:21.561 [2024-07-15 23:28:36.719911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.561 [2024-07-15 23:28:36.719936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.561 qpair failed and we were unable to recover it. 00:25:21.561 [2024-07-15 23:28:36.720210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.561 [2024-07-15 23:28:36.720237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.561 qpair failed and we were unable to recover it. 00:25:21.561 [2024-07-15 23:28:36.720512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.561 [2024-07-15 23:28:36.720560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.561 qpair failed and we were unable to recover it. 00:25:21.561 [2024-07-15 23:28:36.720754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.561 [2024-07-15 23:28:36.720781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.561 qpair failed and we were unable to recover it. 00:25:21.561 [2024-07-15 23:28:36.720959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.561 [2024-07-15 23:28:36.720984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.561 qpair failed and we were unable to recover it. 
00:25:21.561 [2024-07-15 23:28:36.721230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.561 [2024-07-15 23:28:36.721257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.561 qpair failed and we were unable to recover it. 00:25:21.561 [2024-07-15 23:28:36.721537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.561 [2024-07-15 23:28:36.721586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.561 qpair failed and we were unable to recover it. 00:25:21.561 [2024-07-15 23:28:36.721872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.561 [2024-07-15 23:28:36.721900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.561 qpair failed and we were unable to recover it. 00:25:21.561 [2024-07-15 23:28:36.722174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.561 [2024-07-15 23:28:36.722197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.561 qpair failed and we were unable to recover it. 00:25:21.561 [2024-07-15 23:28:36.722433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.561 [2024-07-15 23:28:36.722460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.561 qpair failed and we were unable to recover it. 00:25:21.561 [2024-07-15 23:28:36.722747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.561 [2024-07-15 23:28:36.722774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.561 qpair failed and we were unable to recover it. 00:25:21.561 [2024-07-15 23:28:36.723047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.561 [2024-07-15 23:28:36.723074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.561 qpair failed and we were unable to recover it. 00:25:21.561 [2024-07-15 23:28:36.723307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.561 [2024-07-15 23:28:36.723332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.561 qpair failed and we were unable to recover it. 00:25:21.561 [2024-07-15 23:28:36.723561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.561 [2024-07-15 23:28:36.723587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.561 qpair failed and we were unable to recover it. 00:25:21.561 [2024-07-15 23:28:36.723863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.561 [2024-07-15 23:28:36.723890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.561 qpair failed and we were unable to recover it. 
00:25:21.561 [2024-07-15 23:28:36.724169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.561 [2024-07-15 23:28:36.724196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420
00:25:21.561 qpair failed and we were unable to recover it.
00:25:21.561 [2024-07-15 23:28:36.724419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.561 [2024-07-15 23:28:36.724443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420
00:25:21.561 qpair failed and we were unable to recover it.
00:25:21.561 [... the same three-line failure repeats continuously from 2024-07-15 23:28:36.724 through 23:28:36.784: connect() failed, errno = 111 (posix.c:1023:posix_sock_create); sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 (nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock); qpair failed and we were unable to recover it. ...]
00:25:21.567 [2024-07-15 23:28:36.784435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.567 [2024-07-15 23:28:36.784463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.567 qpair failed and we were unable to recover it. 00:25:21.567 [2024-07-15 23:28:36.784724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.567 [2024-07-15 23:28:36.784759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.567 qpair failed and we were unable to recover it. 00:25:21.567 [2024-07-15 23:28:36.785001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.567 [2024-07-15 23:28:36.785027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.567 qpair failed and we were unable to recover it. 00:25:21.567 [2024-07-15 23:28:36.785302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.567 [2024-07-15 23:28:36.785325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.567 qpair failed and we were unable to recover it. 00:25:21.567 [2024-07-15 23:28:36.785567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.567 [2024-07-15 23:28:36.785595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.567 qpair failed and we were unable to recover it. 00:25:21.567 [2024-07-15 23:28:36.785810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.567 [2024-07-15 23:28:36.785838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.567 qpair failed and we were unable to recover it. 00:25:21.567 [2024-07-15 23:28:36.786107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.567 [2024-07-15 23:28:36.786134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.567 qpair failed and we were unable to recover it. 00:25:21.567 [2024-07-15 23:28:36.786419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.567 [2024-07-15 23:28:36.786445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.567 qpair failed and we were unable to recover it. 00:25:21.567 [2024-07-15 23:28:36.786632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.567 [2024-07-15 23:28:36.786659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.567 qpair failed and we were unable to recover it. 00:25:21.567 [2024-07-15 23:28:36.786894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.567 [2024-07-15 23:28:36.786922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.567 qpair failed and we were unable to recover it. 
00:25:21.567 [2024-07-15 23:28:36.787125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.567 [2024-07-15 23:28:36.787152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.567 qpair failed and we were unable to recover it. 00:25:21.567 [2024-07-15 23:28:36.787343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.567 [2024-07-15 23:28:36.787365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.567 qpair failed and we were unable to recover it. 00:25:21.567 [2024-07-15 23:28:36.787597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.567 [2024-07-15 23:28:36.787624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.567 qpair failed and we were unable to recover it. 00:25:21.567 [2024-07-15 23:28:36.787794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.567 [2024-07-15 23:28:36.787822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.567 qpair failed and we were unable to recover it. 00:25:21.567 [2024-07-15 23:28:36.788092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.567 [2024-07-15 23:28:36.788119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.567 qpair failed and we were unable to recover it. 00:25:21.567 [2024-07-15 23:28:36.788385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.567 [2024-07-15 23:28:36.788407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.567 qpair failed and we were unable to recover it. 00:25:21.567 [2024-07-15 23:28:36.788682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.567 [2024-07-15 23:28:36.788709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.567 qpair failed and we were unable to recover it. 00:25:21.567 [2024-07-15 23:28:36.788968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.567 [2024-07-15 23:28:36.788996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.567 qpair failed and we were unable to recover it. 00:25:21.567 [2024-07-15 23:28:36.789267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.567 [2024-07-15 23:28:36.789294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.567 qpair failed and we were unable to recover it. 00:25:21.567 [2024-07-15 23:28:36.789510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.567 [2024-07-15 23:28:36.789533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.567 qpair failed and we were unable to recover it. 
00:25:21.567 [2024-07-15 23:28:36.789773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.567 [2024-07-15 23:28:36.789811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.567 qpair failed and we were unable to recover it. 00:25:21.567 [2024-07-15 23:28:36.790068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.567 [2024-07-15 23:28:36.790095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.567 qpair failed and we were unable to recover it. 00:25:21.567 [2024-07-15 23:28:36.790365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.567 [2024-07-15 23:28:36.790393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.567 qpair failed and we were unable to recover it. 00:25:21.567 [2024-07-15 23:28:36.790669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.567 [2024-07-15 23:28:36.790691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.567 qpair failed and we were unable to recover it. 00:25:21.567 [2024-07-15 23:28:36.790895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.567 [2024-07-15 23:28:36.790923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.567 qpair failed and we were unable to recover it. 00:25:21.567 [2024-07-15 23:28:36.791199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.567 [2024-07-15 23:28:36.791247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.567 qpair failed and we were unable to recover it. 00:25:21.567 [2024-07-15 23:28:36.791522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.567 [2024-07-15 23:28:36.791550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.567 qpair failed and we were unable to recover it. 00:25:21.567 [2024-07-15 23:28:36.791824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.567 [2024-07-15 23:28:36.791847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.567 qpair failed and we were unable to recover it. 00:25:21.567 [2024-07-15 23:28:36.792122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.567 [2024-07-15 23:28:36.792149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.567 qpair failed and we were unable to recover it. 00:25:21.567 [2024-07-15 23:28:36.792431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.567 [2024-07-15 23:28:36.792480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.567 qpair failed and we were unable to recover it. 
00:25:21.567 [2024-07-15 23:28:36.792761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.567 [2024-07-15 23:28:36.792800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.567 qpair failed and we were unable to recover it. 00:25:21.567 [2024-07-15 23:28:36.793046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.567 [2024-07-15 23:28:36.793068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.567 qpair failed and we were unable to recover it. 00:25:21.567 [2024-07-15 23:28:36.793310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.567 [2024-07-15 23:28:36.793337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.567 qpair failed and we were unable to recover it. 00:25:21.567 [2024-07-15 23:28:36.793624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.567 [2024-07-15 23:28:36.793671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.567 qpair failed and we were unable to recover it. 00:25:21.567 [2024-07-15 23:28:36.793905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.567 [2024-07-15 23:28:36.793937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.567 qpair failed and we were unable to recover it. 00:25:21.567 [2024-07-15 23:28:36.794168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.567 [2024-07-15 23:28:36.794190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.567 qpair failed and we were unable to recover it. 00:25:21.567 [2024-07-15 23:28:36.794466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.567 [2024-07-15 23:28:36.794494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.567 qpair failed and we were unable to recover it. 00:25:21.567 [2024-07-15 23:28:36.794714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.567 [2024-07-15 23:28:36.794747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.567 qpair failed and we were unable to recover it. 00:25:21.567 [2024-07-15 23:28:36.795022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.567 [2024-07-15 23:28:36.795049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.567 qpair failed and we were unable to recover it. 00:25:21.567 [2024-07-15 23:28:36.795319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.567 [2024-07-15 23:28:36.795341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.568 qpair failed and we were unable to recover it. 
00:25:21.568 [2024-07-15 23:28:36.795577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.568 [2024-07-15 23:28:36.795625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.568 qpair failed and we were unable to recover it. 00:25:21.568 [2024-07-15 23:28:36.795858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.568 [2024-07-15 23:28:36.795886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.568 qpair failed and we were unable to recover it. 00:25:21.568 [2024-07-15 23:28:36.796168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.568 [2024-07-15 23:28:36.796195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.568 qpair failed and we were unable to recover it. 00:25:21.568 [2024-07-15 23:28:36.796465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.568 [2024-07-15 23:28:36.796487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.568 qpair failed and we were unable to recover it. 00:25:21.568 [2024-07-15 23:28:36.796761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.568 [2024-07-15 23:28:36.796789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.568 qpair failed and we were unable to recover it. 00:25:21.568 [2024-07-15 23:28:36.797022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.568 [2024-07-15 23:28:36.797050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.568 qpair failed and we were unable to recover it. 00:25:21.568 [2024-07-15 23:28:36.797321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.568 [2024-07-15 23:28:36.797348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.568 qpair failed and we were unable to recover it. 00:25:21.568 [2024-07-15 23:28:36.797569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.568 [2024-07-15 23:28:36.797592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.568 qpair failed and we were unable to recover it. 00:25:21.568 [2024-07-15 23:28:36.797818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.568 [2024-07-15 23:28:36.797846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.568 qpair failed and we were unable to recover it. 00:25:21.568 [2024-07-15 23:28:36.798151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.568 [2024-07-15 23:28:36.798199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.568 qpair failed and we were unable to recover it. 
00:25:21.568 [2024-07-15 23:28:36.798438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.568 [2024-07-15 23:28:36.798465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.568 qpair failed and we were unable to recover it. 00:25:21.568 [2024-07-15 23:28:36.798761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.568 [2024-07-15 23:28:36.798784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.568 qpair failed and we were unable to recover it. 00:25:21.568 [2024-07-15 23:28:36.799077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.568 [2024-07-15 23:28:36.799104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.568 qpair failed and we were unable to recover it. 00:25:21.568 [2024-07-15 23:28:36.799348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.568 [2024-07-15 23:28:36.799396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.568 qpair failed and we were unable to recover it. 00:25:21.568 [2024-07-15 23:28:36.799591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.568 [2024-07-15 23:28:36.799618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.568 qpair failed and we were unable to recover it. 00:25:21.568 [2024-07-15 23:28:36.799860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.568 [2024-07-15 23:28:36.799883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.568 qpair failed and we were unable to recover it. 00:25:21.568 [2024-07-15 23:28:36.800175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.568 [2024-07-15 23:28:36.800202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.568 qpair failed and we were unable to recover it. 00:25:21.568 [2024-07-15 23:28:36.800484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.568 [2024-07-15 23:28:36.800534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.568 qpair failed and we were unable to recover it. 00:25:21.568 [2024-07-15 23:28:36.800802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.568 [2024-07-15 23:28:36.800830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.568 qpair failed and we were unable to recover it. 00:25:21.568 [2024-07-15 23:28:36.801072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.568 [2024-07-15 23:28:36.801094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.568 qpair failed and we were unable to recover it. 
00:25:21.568 [2024-07-15 23:28:36.801351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.568 [2024-07-15 23:28:36.801378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.568 qpair failed and we were unable to recover it. 00:25:21.568 [2024-07-15 23:28:36.801629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.568 [2024-07-15 23:28:36.801681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.568 qpair failed and we were unable to recover it. 00:25:21.568 [2024-07-15 23:28:36.801961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.568 [2024-07-15 23:28:36.801984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.568 qpair failed and we were unable to recover it. 00:25:21.568 [2024-07-15 23:28:36.802216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.568 [2024-07-15 23:28:36.802238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.568 qpair failed and we were unable to recover it. 00:25:21.568 [2024-07-15 23:28:36.802479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.568 [2024-07-15 23:28:36.802506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.568 qpair failed and we were unable to recover it. 00:25:21.568 [2024-07-15 23:28:36.802775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.568 [2024-07-15 23:28:36.802803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.568 qpair failed and we were unable to recover it. 00:25:21.568 [2024-07-15 23:28:36.803079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.568 [2024-07-15 23:28:36.803106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.568 qpair failed and we were unable to recover it. 00:25:21.568 [2024-07-15 23:28:36.803312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.568 [2024-07-15 23:28:36.803334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.568 qpair failed and we were unable to recover it. 00:25:21.568 [2024-07-15 23:28:36.803618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.568 [2024-07-15 23:28:36.803645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.568 qpair failed and we were unable to recover it. 00:25:21.568 [2024-07-15 23:28:36.803916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.568 [2024-07-15 23:28:36.803943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.568 qpair failed and we were unable to recover it. 
00:25:21.568 [2024-07-15 23:28:36.804223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.568 [2024-07-15 23:28:36.804250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.568 qpair failed and we were unable to recover it. 00:25:21.568 [2024-07-15 23:28:36.804520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.568 [2024-07-15 23:28:36.804542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.568 qpair failed and we were unable to recover it. 00:25:21.568 [2024-07-15 23:28:36.804796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.568 [2024-07-15 23:28:36.804824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.568 qpair failed and we were unable to recover it. 00:25:21.568 [2024-07-15 23:28:36.805063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.568 [2024-07-15 23:28:36.805090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.568 qpair failed and we were unable to recover it. 00:25:21.568 [2024-07-15 23:28:36.805342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.568 [2024-07-15 23:28:36.805369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.568 qpair failed and we were unable to recover it. 00:25:21.568 [2024-07-15 23:28:36.805656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.568 [2024-07-15 23:28:36.805679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.568 qpair failed and we were unable to recover it. 00:25:21.568 [2024-07-15 23:28:36.805909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.568 [2024-07-15 23:28:36.805937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.568 qpair failed and we were unable to recover it. 00:25:21.568 [2024-07-15 23:28:36.806182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.568 [2024-07-15 23:28:36.806231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.568 qpair failed and we were unable to recover it. 00:25:21.568 [2024-07-15 23:28:36.806505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.568 [2024-07-15 23:28:36.806532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.568 qpair failed and we were unable to recover it. 00:25:21.568 [2024-07-15 23:28:36.806752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.568 [2024-07-15 23:28:36.806774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.568 qpair failed and we were unable to recover it. 
00:25:21.568 [2024-07-15 23:28:36.807061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.568 [2024-07-15 23:28:36.807088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.569 qpair failed and we were unable to recover it. 00:25:21.569 [2024-07-15 23:28:36.807378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.569 [2024-07-15 23:28:36.807425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.569 qpair failed and we were unable to recover it. 00:25:21.569 [2024-07-15 23:28:36.807707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.569 [2024-07-15 23:28:36.807734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.569 qpair failed and we were unable to recover it. 00:25:21.569 [2024-07-15 23:28:36.808036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.569 [2024-07-15 23:28:36.808058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.569 qpair failed and we were unable to recover it. 00:25:21.569 [2024-07-15 23:28:36.808337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.569 [2024-07-15 23:28:36.808364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.569 qpair failed and we were unable to recover it. 00:25:21.569 [2024-07-15 23:28:36.808616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.569 [2024-07-15 23:28:36.808666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.569 qpair failed and we were unable to recover it. 00:25:21.569 [2024-07-15 23:28:36.808896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.569 [2024-07-15 23:28:36.808923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.569 qpair failed and we were unable to recover it. 00:25:21.569 [2024-07-15 23:28:36.809192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.569 [2024-07-15 23:28:36.809214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.569 qpair failed and we were unable to recover it. 00:25:21.569 [2024-07-15 23:28:36.809469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.569 [2024-07-15 23:28:36.809496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.569 qpair failed and we were unable to recover it. 00:25:21.569 [2024-07-15 23:28:36.809773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.569 [2024-07-15 23:28:36.809801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.569 qpair failed and we were unable to recover it. 
00:25:21.569 [2024-07-15 23:28:36.810018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.569 [2024-07-15 23:28:36.810045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.569 qpair failed and we were unable to recover it. 00:25:21.569 [2024-07-15 23:28:36.810310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.569 [2024-07-15 23:28:36.810332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.569 qpair failed and we were unable to recover it. 00:25:21.569 [2024-07-15 23:28:36.810609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.569 [2024-07-15 23:28:36.810636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.569 qpair failed and we were unable to recover it. 00:25:21.569 [2024-07-15 23:28:36.810810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.569 [2024-07-15 23:28:36.810837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.569 qpair failed and we were unable to recover it. 00:25:21.569 [2024-07-15 23:28:36.811008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.569 [2024-07-15 23:28:36.811036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.569 qpair failed and we were unable to recover it. 00:25:21.569 [2024-07-15 23:28:36.811197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.569 [2024-07-15 23:28:36.811219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.569 qpair failed and we were unable to recover it. 00:25:21.569 [2024-07-15 23:28:36.811382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.569 [2024-07-15 23:28:36.811409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.569 qpair failed and we were unable to recover it. 00:25:21.569 [2024-07-15 23:28:36.811595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.569 [2024-07-15 23:28:36.811622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.569 qpair failed and we were unable to recover it. 00:25:21.569 [2024-07-15 23:28:36.811747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.569 [2024-07-15 23:28:36.811775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.569 qpair failed and we were unable to recover it. 00:25:21.569 [2024-07-15 23:28:36.811962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.569 [2024-07-15 23:28:36.811985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.569 qpair failed and we were unable to recover it. 
00:25:21.569 [2024-07-15 23:28:36.812166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.569 [2024-07-15 23:28:36.812194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.569 qpair failed and we were unable to recover it. 00:25:21.569 [2024-07-15 23:28:36.812388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.569 [2024-07-15 23:28:36.812437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.569 qpair failed and we were unable to recover it. 00:25:21.569 [2024-07-15 23:28:36.812603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.569 [2024-07-15 23:28:36.812630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.569 qpair failed and we were unable to recover it. 00:25:21.569 [2024-07-15 23:28:36.812790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.569 [2024-07-15 23:28:36.812813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.569 qpair failed and we were unable to recover it. 00:25:21.569 [2024-07-15 23:28:36.813007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.569 [2024-07-15 23:28:36.813034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.569 qpair failed and we were unable to recover it. 00:25:21.569 [2024-07-15 23:28:36.813222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.569 [2024-07-15 23:28:36.813270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.569 qpair failed and we were unable to recover it. 00:25:21.569 [2024-07-15 23:28:36.813463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.569 [2024-07-15 23:28:36.813490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.569 qpair failed and we were unable to recover it. 00:25:21.569 [2024-07-15 23:28:36.813750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.569 [2024-07-15 23:28:36.813773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.569 qpair failed and we were unable to recover it. 00:25:21.569 [2024-07-15 23:28:36.813972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.569 [2024-07-15 23:28:36.814006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.569 qpair failed and we were unable to recover it. 00:25:21.569 [2024-07-15 23:28:36.814176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.569 [2024-07-15 23:28:36.814231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.569 qpair failed and we were unable to recover it. 
00:25:21.569 [2024-07-15 23:28:36.814428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.569 [2024-07-15 23:28:36.814455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.569 qpair failed and we were unable to recover it. 00:25:21.569 [2024-07-15 23:28:36.814651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.569 [2024-07-15 23:28:36.814678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.569 qpair failed and we were unable to recover it. 00:25:21.569 [2024-07-15 23:28:36.814845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.569 [2024-07-15 23:28:36.814869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.569 qpair failed and we were unable to recover it. 00:25:21.569 [2024-07-15 23:28:36.815110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.569 [2024-07-15 23:28:36.815171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.569 qpair failed and we were unable to recover it. 00:25:21.569 [2024-07-15 23:28:36.815375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.569 [2024-07-15 23:28:36.815402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.569 qpair failed and we were unable to recover it. 00:25:21.569 [2024-07-15 23:28:36.815571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.569 [2024-07-15 23:28:36.815598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.569 qpair failed and we were unable to recover it. 00:25:21.569 [2024-07-15 23:28:36.815779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.569 [2024-07-15 23:28:36.815818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.569 qpair failed and we were unable to recover it. 00:25:21.569 [2024-07-15 23:28:36.816002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.569 [2024-07-15 23:28:36.816040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.569 qpair failed and we were unable to recover it. 00:25:21.569 [2024-07-15 23:28:36.816204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.569 [2024-07-15 23:28:36.816231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.569 qpair failed and we were unable to recover it. 00:25:21.569 [2024-07-15 23:28:36.816438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.569 [2024-07-15 23:28:36.816460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.569 qpair failed and we were unable to recover it. 
00:25:21.569 [2024-07-15 23:28:36.816671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.569 [2024-07-15 23:28:36.816698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.569 qpair failed and we were unable to recover it. 00:25:21.569 [2024-07-15 23:28:36.816869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.569 [2024-07-15 23:28:36.816897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.570 qpair failed and we were unable to recover it. 00:25:21.570 [2024-07-15 23:28:36.817065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.570 [2024-07-15 23:28:36.817093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.570 qpair failed and we were unable to recover it. 00:25:21.570 [2024-07-15 23:28:36.817252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.570 [2024-07-15 23:28:36.817274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.570 qpair failed and we were unable to recover it. 00:25:21.570 [2024-07-15 23:28:36.817463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.570 [2024-07-15 23:28:36.817490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.570 qpair failed and we were unable to recover it. 00:25:21.570 [2024-07-15 23:28:36.817676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.570 [2024-07-15 23:28:36.817703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.570 qpair failed and we were unable to recover it. 00:25:21.570 [2024-07-15 23:28:36.817894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.570 [2024-07-15 23:28:36.817918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.570 qpair failed and we were unable to recover it. 00:25:21.570 [2024-07-15 23:28:36.818113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.570 [2024-07-15 23:28:36.818135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.570 qpair failed and we were unable to recover it. 00:25:21.570 [2024-07-15 23:28:36.818350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.570 [2024-07-15 23:28:36.818378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.570 qpair failed and we were unable to recover it. 00:25:21.570 [2024-07-15 23:28:36.818536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.570 [2024-07-15 23:28:36.818567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.570 qpair failed and we were unable to recover it. 
00:25:21.570 [2024-07-15 23:28:36.818761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.570 [2024-07-15 23:28:36.818799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420
00:25:21.570 qpair failed and we were unable to recover it.
00:25:21.844 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every further connection attempt from 23:28:36.818 through 23:28:36.864 ...]
00:25:21.844 [2024-07-15 23:28:36.864531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.844 [2024-07-15 23:28:36.864559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.844 qpair failed and we were unable to recover it. 00:25:21.845 [2024-07-15 23:28:36.864801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.845 [2024-07-15 23:28:36.864829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.845 qpair failed and we were unable to recover it. 00:25:21.845 [2024-07-15 23:28:36.865016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.845 [2024-07-15 23:28:36.865043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.845 qpair failed and we were unable to recover it. 00:25:21.845 [2024-07-15 23:28:36.865297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.845 [2024-07-15 23:28:36.865320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.845 qpair failed and we were unable to recover it. 00:25:21.845 [2024-07-15 23:28:36.865575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.845 [2024-07-15 23:28:36.865603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.845 qpair failed and we were unable to recover it. 00:25:21.845 [2024-07-15 23:28:36.865833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.845 [2024-07-15 23:28:36.865861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.845 qpair failed and we were unable to recover it. 00:25:21.845 [2024-07-15 23:28:36.866002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.845 [2024-07-15 23:28:36.866034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.845 qpair failed and we were unable to recover it. 00:25:21.845 [2024-07-15 23:28:36.866234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.845 [2024-07-15 23:28:36.866257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.845 qpair failed and we were unable to recover it. 00:25:21.845 [2024-07-15 23:28:36.866511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.845 [2024-07-15 23:28:36.866538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.845 qpair failed and we were unable to recover it. 00:25:21.845 [2024-07-15 23:28:36.866751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.845 [2024-07-15 23:28:36.866779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.845 qpair failed and we were unable to recover it. 
00:25:21.845 [2024-07-15 23:28:36.866952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.845 [2024-07-15 23:28:36.866979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.845 qpair failed and we were unable to recover it. 00:25:21.845 [2024-07-15 23:28:36.867106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.845 [2024-07-15 23:28:36.867143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.845 qpair failed and we were unable to recover it. 00:25:21.845 [2024-07-15 23:28:36.867264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.845 [2024-07-15 23:28:36.867288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.845 qpair failed and we were unable to recover it. 00:25:21.845 [2024-07-15 23:28:36.867478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.845 [2024-07-15 23:28:36.867515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.845 qpair failed and we were unable to recover it. 00:25:21.845 [2024-07-15 23:28:36.867803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.845 [2024-07-15 23:28:36.867831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.845 qpair failed and we were unable to recover it. 00:25:21.845 [2024-07-15 23:28:36.867989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.845 [2024-07-15 23:28:36.868029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.845 qpair failed and we were unable to recover it. 00:25:21.845 [2024-07-15 23:28:36.868243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.845 [2024-07-15 23:28:36.868270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.845 qpair failed and we were unable to recover it. 00:25:21.845 [2024-07-15 23:28:36.868445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.845 [2024-07-15 23:28:36.868491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.845 qpair failed and we were unable to recover it. 00:25:21.845 [2024-07-15 23:28:36.868701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.845 [2024-07-15 23:28:36.868728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.845 qpair failed and we were unable to recover it. 00:25:21.845 [2024-07-15 23:28:36.868938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.845 [2024-07-15 23:28:36.868962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.845 qpair failed and we were unable to recover it. 
00:25:21.845 [2024-07-15 23:28:36.869196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.845 [2024-07-15 23:28:36.869224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.845 qpair failed and we were unable to recover it. 00:25:21.845 [2024-07-15 23:28:36.869462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.845 [2024-07-15 23:28:36.869507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.845 qpair failed and we were unable to recover it. 00:25:21.845 [2024-07-15 23:28:36.869732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.845 [2024-07-15 23:28:36.869788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.845 qpair failed and we were unable to recover it. 00:25:21.845 [2024-07-15 23:28:36.869964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.845 [2024-07-15 23:28:36.869989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.845 qpair failed and we were unable to recover it. 00:25:21.845 [2024-07-15 23:28:36.870256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.845 [2024-07-15 23:28:36.870283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.845 qpair failed and we were unable to recover it. 00:25:21.845 [2024-07-15 23:28:36.870481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.845 [2024-07-15 23:28:36.870534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.845 qpair failed and we were unable to recover it. 00:25:21.845 [2024-07-15 23:28:36.870807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.845 [2024-07-15 23:28:36.870835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.845 qpair failed and we were unable to recover it. 00:25:21.845 [2024-07-15 23:28:36.871054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.845 [2024-07-15 23:28:36.871077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.845 qpair failed and we were unable to recover it. 00:25:21.845 [2024-07-15 23:28:36.871346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.845 [2024-07-15 23:28:36.871373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.845 qpair failed and we were unable to recover it. 00:25:21.845 [2024-07-15 23:28:36.871649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.845 [2024-07-15 23:28:36.871693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.845 qpair failed and we were unable to recover it. 
00:25:21.845 [2024-07-15 23:28:36.871956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.845 [2024-07-15 23:28:36.871981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.845 qpair failed and we were unable to recover it. 00:25:21.845 [2024-07-15 23:28:36.872248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.845 [2024-07-15 23:28:36.872271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.845 qpair failed and we were unable to recover it. 00:25:21.845 [2024-07-15 23:28:36.872519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.845 [2024-07-15 23:28:36.872546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.845 qpair failed and we were unable to recover it. 00:25:21.845 [2024-07-15 23:28:36.872819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.845 [2024-07-15 23:28:36.872851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.845 qpair failed and we were unable to recover it. 00:25:21.845 [2024-07-15 23:28:36.873120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.845 [2024-07-15 23:28:36.873147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.845 qpair failed and we were unable to recover it. 00:25:21.845 [2024-07-15 23:28:36.873431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.845 [2024-07-15 23:28:36.873454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.845 qpair failed and we were unable to recover it. 00:25:21.845 [2024-07-15 23:28:36.873695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.845 [2024-07-15 23:28:36.873722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.845 qpair failed and we were unable to recover it. 00:25:21.845 [2024-07-15 23:28:36.873970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.845 [2024-07-15 23:28:36.873997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.845 qpair failed and we were unable to recover it. 00:25:21.845 [2024-07-15 23:28:36.874272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.845 [2024-07-15 23:28:36.874300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.845 qpair failed and we were unable to recover it. 00:25:21.845 [2024-07-15 23:28:36.874510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.845 [2024-07-15 23:28:36.874533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.845 qpair failed and we were unable to recover it. 
00:25:21.845 [2024-07-15 23:28:36.874743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.845 [2024-07-15 23:28:36.874771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.845 qpair failed and we were unable to recover it. 00:25:21.845 [2024-07-15 23:28:36.874952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.846 [2024-07-15 23:28:36.874979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.846 qpair failed and we were unable to recover it. 00:25:21.846 [2024-07-15 23:28:36.875151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.846 [2024-07-15 23:28:36.875178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.846 qpair failed and we were unable to recover it. 00:25:21.846 [2024-07-15 23:28:36.875346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.846 [2024-07-15 23:28:36.875369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.846 qpair failed and we were unable to recover it. 00:25:21.846 [2024-07-15 23:28:36.875539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.846 [2024-07-15 23:28:36.875566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.846 qpair failed and we were unable to recover it. 00:25:21.846 [2024-07-15 23:28:36.875794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.846 [2024-07-15 23:28:36.875822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.846 qpair failed and we were unable to recover it. 00:25:21.846 [2024-07-15 23:28:36.875971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.846 [2024-07-15 23:28:36.876004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.846 qpair failed and we were unable to recover it. 00:25:21.846 [2024-07-15 23:28:36.876189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.846 [2024-07-15 23:28:36.876221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.846 qpair failed and we were unable to recover it. 00:25:21.846 [2024-07-15 23:28:36.876445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.846 [2024-07-15 23:28:36.876472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.846 qpair failed and we were unable to recover it. 00:25:21.846 [2024-07-15 23:28:36.876733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.846 [2024-07-15 23:28:36.876767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.846 qpair failed and we were unable to recover it. 
00:25:21.846 [2024-07-15 23:28:36.876957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.846 [2024-07-15 23:28:36.876984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.846 qpair failed and we were unable to recover it. 00:25:21.846 [2024-07-15 23:28:36.877180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.846 [2024-07-15 23:28:36.877203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.846 qpair failed and we were unable to recover it. 00:25:21.846 [2024-07-15 23:28:36.877368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.846 [2024-07-15 23:28:36.877394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.846 qpair failed and we were unable to recover it. 00:25:21.846 [2024-07-15 23:28:36.877558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.846 [2024-07-15 23:28:36.877586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.846 qpair failed and we were unable to recover it. 00:25:21.846 [2024-07-15 23:28:36.877851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.846 [2024-07-15 23:28:36.877879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.846 qpair failed and we were unable to recover it. 00:25:21.846 [2024-07-15 23:28:36.878097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.846 [2024-07-15 23:28:36.878120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.846 qpair failed and we were unable to recover it. 00:25:21.846 [2024-07-15 23:28:36.878297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.846 [2024-07-15 23:28:36.878324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.846 qpair failed and we were unable to recover it. 00:25:21.846 [2024-07-15 23:28:36.878506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.846 [2024-07-15 23:28:36.878547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.846 qpair failed and we were unable to recover it. 00:25:21.846 [2024-07-15 23:28:36.878747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.846 [2024-07-15 23:28:36.878775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.846 qpair failed and we were unable to recover it. 00:25:21.846 [2024-07-15 23:28:36.879050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.846 [2024-07-15 23:28:36.879074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.846 qpair failed and we were unable to recover it. 
00:25:21.846 [2024-07-15 23:28:36.879338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.846 [2024-07-15 23:28:36.879370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.846 qpair failed and we were unable to recover it. 00:25:21.846 [2024-07-15 23:28:36.879554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.846 [2024-07-15 23:28:36.879598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.846 qpair failed and we were unable to recover it. 00:25:21.846 [2024-07-15 23:28:36.879838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.846 [2024-07-15 23:28:36.879866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.846 qpair failed and we were unable to recover it. 00:25:21.846 [2024-07-15 23:28:36.880115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.846 [2024-07-15 23:28:36.880137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.846 qpair failed and we were unable to recover it. 00:25:21.846 [2024-07-15 23:28:36.880422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.846 [2024-07-15 23:28:36.880449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.846 qpair failed and we were unable to recover it. 00:25:21.846 [2024-07-15 23:28:36.880705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.846 [2024-07-15 23:28:36.880731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.846 qpair failed and we were unable to recover it. 00:25:21.846 [2024-07-15 23:28:36.880891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.846 [2024-07-15 23:28:36.880915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.846 qpair failed and we were unable to recover it. 00:25:21.846 [2024-07-15 23:28:36.881036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.846 [2024-07-15 23:28:36.881074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.846 qpair failed and we were unable to recover it. 00:25:21.846 [2024-07-15 23:28:36.881270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.846 [2024-07-15 23:28:36.881297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.846 qpair failed and we were unable to recover it. 00:25:21.846 [2024-07-15 23:28:36.881520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.846 [2024-07-15 23:28:36.881547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.846 qpair failed and we were unable to recover it. 
00:25:21.846 [2024-07-15 23:28:36.881687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.846 [2024-07-15 23:28:36.881715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.846 qpair failed and we were unable to recover it. 00:25:21.846 [2024-07-15 23:28:36.881943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.846 [2024-07-15 23:28:36.881968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.846 qpair failed and we were unable to recover it. 00:25:21.846 [2024-07-15 23:28:36.882205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.846 [2024-07-15 23:28:36.882232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.846 qpair failed and we were unable to recover it. 00:25:21.846 [2024-07-15 23:28:36.882485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.846 [2024-07-15 23:28:36.882512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.846 qpair failed and we were unable to recover it. 00:25:21.846 [2024-07-15 23:28:36.882656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.846 [2024-07-15 23:28:36.882683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.846 qpair failed and we were unable to recover it. 00:25:21.847 [2024-07-15 23:28:36.882828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.847 [2024-07-15 23:28:36.882853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.847 qpair failed and we were unable to recover it. 00:25:21.847 [2024-07-15 23:28:36.883104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.847 [2024-07-15 23:28:36.883132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.847 qpair failed and we were unable to recover it. 00:25:21.847 [2024-07-15 23:28:36.883399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.847 [2024-07-15 23:28:36.883425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.847 qpair failed and we were unable to recover it. 00:25:21.847 [2024-07-15 23:28:36.883651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.847 [2024-07-15 23:28:36.883678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.847 qpair failed and we were unable to recover it. 00:25:21.847 [2024-07-15 23:28:36.883954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.847 [2024-07-15 23:28:36.883979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.847 qpair failed and we were unable to recover it. 
00:25:21.847 [2024-07-15 23:28:36.884240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.847 [2024-07-15 23:28:36.884269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.847 qpair failed and we were unable to recover it. 00:25:21.847 [2024-07-15 23:28:36.884505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.847 [2024-07-15 23:28:36.884531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.847 qpair failed and we were unable to recover it. 00:25:21.847 [2024-07-15 23:28:36.884721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.847 [2024-07-15 23:28:36.884753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.847 qpair failed and we were unable to recover it. 00:25:21.847 [2024-07-15 23:28:36.884994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.847 [2024-07-15 23:28:36.885031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.847 qpair failed and we were unable to recover it. 00:25:21.847 [2024-07-15 23:28:36.885278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.847 [2024-07-15 23:28:36.885304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.847 qpair failed and we were unable to recover it. 00:25:21.847 [2024-07-15 23:28:36.885445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.847 [2024-07-15 23:28:36.885472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.847 qpair failed and we were unable to recover it. 00:25:21.847 [2024-07-15 23:28:36.885672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.847 [2024-07-15 23:28:36.885698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.847 qpair failed and we were unable to recover it. 00:25:21.847 [2024-07-15 23:28:36.885899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.847 [2024-07-15 23:28:36.885923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.847 qpair failed and we were unable to recover it. 00:25:21.847 [2024-07-15 23:28:36.886197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.847 [2024-07-15 23:28:36.886223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.847 qpair failed and we were unable to recover it. 00:25:21.847 [2024-07-15 23:28:36.886420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.847 [2024-07-15 23:28:36.886446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.847 qpair failed and we were unable to recover it. 
00:25:21.847 [2024-07-15 23:28:36.886639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.847 [2024-07-15 23:28:36.886666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.847 qpair failed and we were unable to recover it. 00:25:21.847 [2024-07-15 23:28:36.886904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.847 [2024-07-15 23:28:36.886929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.847 qpair failed and we were unable to recover it. 00:25:21.847 [2024-07-15 23:28:36.887109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.847 [2024-07-15 23:28:36.887134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.847 qpair failed and we were unable to recover it. 00:25:21.847 [2024-07-15 23:28:36.887396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.847 [2024-07-15 23:28:36.887421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.847 qpair failed and we were unable to recover it. 00:25:21.847 [2024-07-15 23:28:36.887683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.847 [2024-07-15 23:28:36.887708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.847 qpair failed and we were unable to recover it. 00:25:21.847 [2024-07-15 23:28:36.887872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.847 [2024-07-15 23:28:36.887895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.847 qpair failed and we were unable to recover it. 00:25:21.847 [2024-07-15 23:28:36.888157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.847 [2024-07-15 23:28:36.888182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.847 qpair failed and we were unable to recover it. 00:25:21.847 [2024-07-15 23:28:36.888350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.847 [2024-07-15 23:28:36.888375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.847 qpair failed and we were unable to recover it. 00:25:21.847 [2024-07-15 23:28:36.888598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.847 [2024-07-15 23:28:36.888623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.847 qpair failed and we were unable to recover it. 00:25:21.847 [2024-07-15 23:28:36.888822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.847 [2024-07-15 23:28:36.888846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.847 qpair failed and we were unable to recover it. 
00:25:21.847 [2024-07-15 23:28:36.889095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.847 [2024-07-15 23:28:36.889121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.847 qpair failed and we were unable to recover it. 00:25:21.847 [2024-07-15 23:28:36.889392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.847 [2024-07-15 23:28:36.889422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.847 qpair failed and we were unable to recover it. 00:25:21.847 [2024-07-15 23:28:36.889654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.847 [2024-07-15 23:28:36.889680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.847 qpair failed and we were unable to recover it. 00:25:21.847 [2024-07-15 23:28:36.889910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.847 [2024-07-15 23:28:36.889935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.847 qpair failed and we were unable to recover it. 00:25:21.847 [2024-07-15 23:28:36.890194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.847 [2024-07-15 23:28:36.890219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.847 qpair failed and we were unable to recover it. 00:25:21.847 [2024-07-15 23:28:36.890367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.847 [2024-07-15 23:28:36.890392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.847 qpair failed and we were unable to recover it. 00:25:21.847 [2024-07-15 23:28:36.890610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.847 [2024-07-15 23:28:36.890635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.847 qpair failed and we were unable to recover it. 00:25:21.847 [2024-07-15 23:28:36.890907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.847 [2024-07-15 23:28:36.890932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.847 qpair failed and we were unable to recover it. 00:25:21.847 [2024-07-15 23:28:36.891106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.847 [2024-07-15 23:28:36.891130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.847 qpair failed and we were unable to recover it. 00:25:21.847 [2024-07-15 23:28:36.891369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.847 [2024-07-15 23:28:36.891393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.847 qpair failed and we were unable to recover it. 
00:25:21.847 [2024-07-15 23:28:36.891660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.847 [2024-07-15 23:28:36.891685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.847 qpair failed and we were unable to recover it. 00:25:21.847 [2024-07-15 23:28:36.891868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.847 [2024-07-15 23:28:36.891893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.847 qpair failed and we were unable to recover it. 00:25:21.847 [2024-07-15 23:28:36.892070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.847 [2024-07-15 23:28:36.892109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.847 qpair failed and we were unable to recover it. 00:25:21.847 [2024-07-15 23:28:36.892287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.847 [2024-07-15 23:28:36.892311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.847 qpair failed and we were unable to recover it. 00:25:21.847 [2024-07-15 23:28:36.892568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.847 [2024-07-15 23:28:36.892593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.847 qpair failed and we were unable to recover it. 00:25:21.847 [2024-07-15 23:28:36.892883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.847 [2024-07-15 23:28:36.892908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.847 qpair failed and we were unable to recover it. 00:25:21.848 [2024-07-15 23:28:36.893125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.848 [2024-07-15 23:28:36.893150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.848 qpair failed and we were unable to recover it. 00:25:21.848 [2024-07-15 23:28:36.893338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.848 [2024-07-15 23:28:36.893362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.848 qpair failed and we were unable to recover it. 00:25:21.848 [2024-07-15 23:28:36.893629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.848 [2024-07-15 23:28:36.893653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.848 qpair failed and we were unable to recover it. 00:25:21.848 [2024-07-15 23:28:36.893878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.848 [2024-07-15 23:28:36.893903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.848 qpair failed and we were unable to recover it. 
00:25:21.848 [2024-07-15 23:28:36.894062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.848 [2024-07-15 23:28:36.894101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.848 qpair failed and we were unable to recover it. 00:25:21.848 [2024-07-15 23:28:36.894331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.848 [2024-07-15 23:28:36.894354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.848 qpair failed and we were unable to recover it. 00:25:21.848 [2024-07-15 23:28:36.894481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.848 [2024-07-15 23:28:36.894504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.848 qpair failed and we were unable to recover it. 00:25:21.848 [2024-07-15 23:28:36.894785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.848 [2024-07-15 23:28:36.894810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.848 qpair failed and we were unable to recover it. 00:25:21.848 [2024-07-15 23:28:36.895072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.848 [2024-07-15 23:28:36.895095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.848 qpair failed and we were unable to recover it. 00:25:21.848 [2024-07-15 23:28:36.895289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.848 [2024-07-15 23:28:36.895312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.848 qpair failed and we were unable to recover it. 00:25:21.848 [2024-07-15 23:28:36.895577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.848 [2024-07-15 23:28:36.895600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.848 qpair failed and we were unable to recover it. 00:25:21.848 [2024-07-15 23:28:36.895814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.848 [2024-07-15 23:28:36.895839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.848 qpair failed and we were unable to recover it. 00:25:21.848 [2024-07-15 23:28:36.895972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.848 [2024-07-15 23:28:36.896012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.848 qpair failed and we were unable to recover it. 00:25:21.848 [2024-07-15 23:28:36.896263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.848 [2024-07-15 23:28:36.896287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.848 qpair failed and we were unable to recover it. 
00:25:21.848 [2024-07-15 23:28:36.896550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.848 [2024-07-15 23:28:36.896573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420
00:25:21.848 qpair failed and we were unable to recover it.
[... the same three-line error repeats back-to-back from 23:28:36.896 to 23:28:36.946: every connect() attempt returns errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x1b9f1e0 (addr=10.0.0.2, port=4420), and each qpair fails without recovering ...]
00:25:21.853 [2024-07-15 23:28:36.945978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.853 [2024-07-15 23:28:36.946004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420
00:25:21.853 qpair failed and we were unable to recover it.
00:25:21.853 [2024-07-15 23:28:36.946199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.853 [2024-07-15 23:28:36.946222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.853 qpair failed and we were unable to recover it. 00:25:21.853 [2024-07-15 23:28:36.946420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.853 [2024-07-15 23:28:36.946443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.853 qpair failed and we were unable to recover it. 00:25:21.853 [2024-07-15 23:28:36.946617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.853 [2024-07-15 23:28:36.946640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.853 qpair failed and we were unable to recover it. 00:25:21.853 [2024-07-15 23:28:36.946852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.853 [2024-07-15 23:28:36.946877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.853 qpair failed and we were unable to recover it. 00:25:21.853 [2024-07-15 23:28:36.947048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.853 [2024-07-15 23:28:36.947072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.853 qpair failed and we were unable to recover it. 00:25:21.853 [2024-07-15 23:28:36.947217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.853 [2024-07-15 23:28:36.947244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.853 qpair failed and we were unable to recover it. 00:25:21.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2446246 Killed "${NVMF_APP[@]}" "$@" 00:25:21.853 [2024-07-15 23:28:36.947429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.853 [2024-07-15 23:28:36.947457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.853 qpair failed and we were unable to recover it. 00:25:21.853 23:28:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:25:21.853 [2024-07-15 23:28:36.947679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.853 [2024-07-15 23:28:36.947703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.853 qpair failed and we were unable to recover it. 00:25:21.853 23:28:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:25:21.853 [2024-07-15 23:28:36.947874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.854 [2024-07-15 23:28:36.947899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.854 qpair failed and we were unable to recover it. 
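Every connect() failure above reports errno = 111, which is ECONNREFUSED on Linux: target_disconnect.sh has just killed the target application (the "Killed ${NVMF_APP[@]}" line), so nothing is listening on 10.0.0.2:4420 while the initiator keeps retrying its qpair. A minimal sketch of how that condition can be observed from a shell, outside the test suite and assuming bash with /dev/tcp support, is:

# Probe the listener address the initiator is retrying against (values taken from the log above).
# If no nvmf_tgt is listening, the connect attempt is refused, which is what errno 111 means.
if (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
    echo "listener is up on 10.0.0.2:4420"
else
    echo "connect() refused on 10.0.0.2:4420 (errno 111): no target listening"
fi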
00:25:21.854 23:28:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:21.854 [2024-07-15 23:28:36.948064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.854 [2024-07-15 23:28:36.948088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.854 qpair failed and we were unable to recover it. 00:25:21.854 23:28:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:21.854 23:28:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:21.854 [2024-07-15 23:28:36.948310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.854 [2024-07-15 23:28:36.948334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.854 qpair failed and we were unable to recover it. 00:25:21.854 [2024-07-15 23:28:36.948616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.854 [2024-07-15 23:28:36.948640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.854 qpair failed and we were unable to recover it. 00:25:21.854 [2024-07-15 23:28:36.948797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.854 [2024-07-15 23:28:36.948824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.854 qpair failed and we were unable to recover it. 00:25:21.854 [2024-07-15 23:28:36.948943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.854 [2024-07-15 23:28:36.948967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.854 qpair failed and we were unable to recover it. 00:25:21.854 [2024-07-15 23:28:36.949182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.854 [2024-07-15 23:28:36.949206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.854 qpair failed and we were unable to recover it. 00:25:21.854 [2024-07-15 23:28:36.949411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.854 [2024-07-15 23:28:36.949435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.854 qpair failed and we were unable to recover it. 00:25:21.854 [2024-07-15 23:28:36.949609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.854 [2024-07-15 23:28:36.949636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.854 qpair failed and we were unable to recover it. 00:25:21.854 [2024-07-15 23:28:36.949823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.854 [2024-07-15 23:28:36.949849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.854 qpair failed and we were unable to recover it. 
00:25:21.854 [2024-07-15 23:28:36.950005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.854 [2024-07-15 23:28:36.950043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.854 qpair failed and we were unable to recover it. 00:25:21.854 [2024-07-15 23:28:36.950224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.854 [2024-07-15 23:28:36.950248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.854 qpair failed and we were unable to recover it. 00:25:21.854 [2024-07-15 23:28:36.950448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.854 [2024-07-15 23:28:36.950472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.854 qpair failed and we were unable to recover it. 00:25:21.854 [2024-07-15 23:28:36.950643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.854 [2024-07-15 23:28:36.950666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.854 qpair failed and we were unable to recover it. 00:25:21.854 [2024-07-15 23:28:36.950844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.854 [2024-07-15 23:28:36.950869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.854 qpair failed and we were unable to recover it. 00:25:21.854 [2024-07-15 23:28:36.951004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.854 [2024-07-15 23:28:36.951030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.854 qpair failed and we were unable to recover it. 00:25:21.854 [2024-07-15 23:28:36.951230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.854 [2024-07-15 23:28:36.951253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.854 qpair failed and we were unable to recover it. 00:25:21.854 [2024-07-15 23:28:36.951438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.854 [2024-07-15 23:28:36.951461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.854 qpair failed and we were unable to recover it. 00:25:21.854 [2024-07-15 23:28:36.951632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.854 [2024-07-15 23:28:36.951655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.854 qpair failed and we were unable to recover it. 00:25:21.854 [2024-07-15 23:28:36.951807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.854 [2024-07-15 23:28:36.951833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.854 qpair failed and we were unable to recover it. 
00:25:21.854 [2024-07-15 23:28:36.951956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.854 [2024-07-15 23:28:36.951981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.854 qpair failed and we were unable to recover it. 00:25:21.854 [2024-07-15 23:28:36.952179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.854 [2024-07-15 23:28:36.952203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.854 qpair failed and we were unable to recover it. 00:25:21.854 [2024-07-15 23:28:36.952381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.854 [2024-07-15 23:28:36.952405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.854 qpair failed and we were unable to recover it. 00:25:21.854 [2024-07-15 23:28:36.952582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.854 [2024-07-15 23:28:36.952605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.854 qpair failed and we were unable to recover it. 00:25:21.854 23:28:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2446719 00:25:21.854 [2024-07-15 23:28:36.952786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.854 [2024-07-15 23:28:36.952817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.854 qpair failed and we were unable to recover it. 00:25:21.854 23:28:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:25:21.854 23:28:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2446719 00:25:21.854 [2024-07-15 23:28:36.952949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.854 [2024-07-15 23:28:36.952974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.854 qpair failed and we were unable to recover it. 00:25:21.854 [2024-07-15 23:28:36.953109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.854 [2024-07-15 23:28:36.953133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.854 qpair failed and we were unable to recover it. 00:25:21.854 23:28:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2446719 ']' 00:25:21.854 [2024-07-15 23:28:36.953317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.854 [2024-07-15 23:28:36.953341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.854 qpair failed and we were unable to recover it. 
00:25:21.854 23:28:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock [2024-07-15 23:28:36.953507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.854 [2024-07-15 23:28:36.953549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.854 qpair failed and we were unable to recover it. 00:25:21.854 23:28:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:21.854 23:28:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:21.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... [2024-07-15 23:28:36.953724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.855 [2024-07-15 23:28:36.953771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.855 qpair failed and we were unable to recover it. 00:25:21.855 23:28:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:21.855 [2024-07-15 23:28:36.953895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.855 [2024-07-15 23:28:36.953921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.855 qpair failed and we were unable to recover it. 00:25:21.855 23:28:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:21.855 [2024-07-15 23:28:36.954074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.855 [2024-07-15 23:28:36.954099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.855 qpair failed and we were unable to recover it. 00:25:21.855 [2024-07-15 23:28:36.954268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.855 [2024-07-15 23:28:36.954291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.855 qpair failed and we were unable to recover it. 00:25:21.855 [2024-07-15 23:28:36.954464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.855 [2024-07-15 23:28:36.954503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.855 qpair failed and we were unable to recover it. 00:25:21.855 [2024-07-15 23:28:36.954662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.855 [2024-07-15 23:28:36.954684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.855 qpair failed and we were unable to recover it. 
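The shell trace above shows the test relaunching the target (nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 under ip netns exec cvl_0_0_ns_spdk, pid 2446719) and then waiting, via waitforlisten, for the new process to come up and listen on the UNIX domain socket /var/tmp/spdk.sock. A rough sketch of what that wait amounts to, not the actual waitforlisten helper from autotest_common.sh, could look like:

# Poll until the relaunched target (pid taken from the log above) is alive, has created its
# RPC socket, and accepts TCP connections again on the listener address seen in the log.
pid=2446719
for _ in $(seq 1 100); do
    kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited before listening"; break; }
    if [ -S /var/tmp/spdk.sock ] && (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
        echo "target is listening again on 10.0.0.2:4420"
        break
    fi
    sleep 0.1
done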
00:25:21.855 [2024-07-15 23:28:36.954835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.855 [2024-07-15 23:28:36.954860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.855 qpair failed and we were unable to recover it. 00:25:21.855 [2024-07-15 23:28:36.954998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.855 [2024-07-15 23:28:36.955023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.855 qpair failed and we were unable to recover it. 00:25:21.855 [2024-07-15 23:28:36.955172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.855 [2024-07-15 23:28:36.955212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.855 qpair failed and we were unable to recover it. 00:25:21.855 [2024-07-15 23:28:36.955378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.855 [2024-07-15 23:28:36.955404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.855 qpair failed and we were unable to recover it. 00:25:21.855 [2024-07-15 23:28:36.955583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.855 [2024-07-15 23:28:36.955606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.855 qpair failed and we were unable to recover it. 00:25:21.855 [2024-07-15 23:28:36.955819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.855 [2024-07-15 23:28:36.955844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.855 qpair failed and we were unable to recover it. 00:25:21.855 [2024-07-15 23:28:36.955981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.855 [2024-07-15 23:28:36.956005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.855 qpair failed and we were unable to recover it. 00:25:21.855 [2024-07-15 23:28:36.956158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.855 [2024-07-15 23:28:36.956197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.855 qpair failed and we were unable to recover it. 00:25:21.855 [2024-07-15 23:28:36.956368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.855 [2024-07-15 23:28:36.956409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.855 qpair failed and we were unable to recover it. 00:25:21.855 [2024-07-15 23:28:36.956593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.855 [2024-07-15 23:28:36.956618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.855 qpair failed and we were unable to recover it. 
00:25:21.855 [2024-07-15 23:28:36.956767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.855 [2024-07-15 23:28:36.956808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.855 qpair failed and we were unable to recover it. 00:25:21.855 [2024-07-15 23:28:36.956958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.855 [2024-07-15 23:28:36.956984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.855 qpair failed and we were unable to recover it. 00:25:21.855 [2024-07-15 23:28:36.957102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.855 [2024-07-15 23:28:36.957127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.855 qpair failed and we were unable to recover it. 00:25:21.855 [2024-07-15 23:28:36.957286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.855 [2024-07-15 23:28:36.957312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.855 qpair failed and we were unable to recover it. 00:25:21.855 [2024-07-15 23:28:36.957490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.855 [2024-07-15 23:28:36.957517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.855 qpair failed and we were unable to recover it. 00:25:21.855 [2024-07-15 23:28:36.957671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.855 [2024-07-15 23:28:36.957697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.855 qpair failed and we were unable to recover it. 00:25:21.855 [2024-07-15 23:28:36.957829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.855 [2024-07-15 23:28:36.957855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.855 qpair failed and we were unable to recover it. 00:25:21.855 [2024-07-15 23:28:36.957988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.855 [2024-07-15 23:28:36.958014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.855 qpair failed and we were unable to recover it. 00:25:21.855 [2024-07-15 23:28:36.958163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.855 [2024-07-15 23:28:36.958190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.855 qpair failed and we were unable to recover it. 00:25:21.855 [2024-07-15 23:28:36.958329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.855 [2024-07-15 23:28:36.958355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.855 qpair failed and we were unable to recover it. 
00:25:21.855 [2024-07-15 23:28:36.958505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.855 [2024-07-15 23:28:36.958545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.855 qpair failed and we were unable to recover it. 00:25:21.855 [2024-07-15 23:28:36.958711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.855 [2024-07-15 23:28:36.958742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.856 qpair failed and we were unable to recover it. 00:25:21.856 [2024-07-15 23:28:36.958883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.856 [2024-07-15 23:28:36.958910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.856 qpair failed and we were unable to recover it. 00:25:21.856 [2024-07-15 23:28:36.960290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.856 [2024-07-15 23:28:36.960322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.856 qpair failed and we were unable to recover it. 00:25:21.856 [2024-07-15 23:28:36.960507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.856 [2024-07-15 23:28:36.960535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.856 qpair failed and we were unable to recover it. 00:25:21.856 [2024-07-15 23:28:36.960666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.856 [2024-07-15 23:28:36.960693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.856 qpair failed and we were unable to recover it. 00:25:21.856 [2024-07-15 23:28:36.960826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.856 [2024-07-15 23:28:36.960853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.856 qpair failed and we were unable to recover it. 00:25:21.856 [2024-07-15 23:28:36.961596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.856 [2024-07-15 23:28:36.961627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.856 qpair failed and we were unable to recover it. 00:25:21.856 [2024-07-15 23:28:36.961820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.856 [2024-07-15 23:28:36.961848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.856 qpair failed and we were unable to recover it. 00:25:21.856 [2024-07-15 23:28:36.961969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.856 [2024-07-15 23:28:36.961995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.856 qpair failed and we were unable to recover it. 
00:25:21.856 [2024-07-15 23:28:36.962148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.856 [2024-07-15 23:28:36.962174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.856 qpair failed and we were unable to recover it. 00:25:21.856 [2024-07-15 23:28:36.962352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.856 [2024-07-15 23:28:36.962377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.856 qpair failed and we were unable to recover it. 00:25:21.856 [2024-07-15 23:28:36.962505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.856 [2024-07-15 23:28:36.962529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.856 qpair failed and we were unable to recover it. 00:25:21.856 [2024-07-15 23:28:36.962640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.856 [2024-07-15 23:28:36.962666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.856 qpair failed and we were unable to recover it. 00:25:21.856 [2024-07-15 23:28:36.962811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.856 [2024-07-15 23:28:36.962839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.856 qpair failed and we were unable to recover it. 00:25:21.856 [2024-07-15 23:28:36.962960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.856 [2024-07-15 23:28:36.962986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.856 qpair failed and we were unable to recover it. 00:25:21.856 [2024-07-15 23:28:36.963860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.856 [2024-07-15 23:28:36.963891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.856 qpair failed and we were unable to recover it. 00:25:21.856 [2024-07-15 23:28:36.964040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.856 [2024-07-15 23:28:36.964067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.856 qpair failed and we were unable to recover it. 00:25:21.856 [2024-07-15 23:28:36.964186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.856 [2024-07-15 23:28:36.964212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.856 qpair failed and we were unable to recover it. 00:25:21.856 [2024-07-15 23:28:36.964402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.856 [2024-07-15 23:28:36.964428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.856 qpair failed and we were unable to recover it. 
00:25:21.856 [2024-07-15 23:28:36.965092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.856 [2024-07-15 23:28:36.965136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.856 qpair failed and we were unable to recover it. 00:25:21.856 [2024-07-15 23:28:36.965301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.856 [2024-07-15 23:28:36.965328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.856 qpair failed and we were unable to recover it. 00:25:21.856 [2024-07-15 23:28:36.965473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.856 [2024-07-15 23:28:36.965514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.856 qpair failed and we were unable to recover it. 00:25:21.856 [2024-07-15 23:28:36.965679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.856 [2024-07-15 23:28:36.965706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.856 qpair failed and we were unable to recover it. 00:25:21.856 [2024-07-15 23:28:36.965881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.856 [2024-07-15 23:28:36.965908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.856 qpair failed and we were unable to recover it. 00:25:21.856 [2024-07-15 23:28:36.966040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.856 [2024-07-15 23:28:36.966066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.856 qpair failed and we were unable to recover it. 00:25:21.856 [2024-07-15 23:28:36.966186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.856 [2024-07-15 23:28:36.966212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.856 qpair failed and we were unable to recover it. 00:25:21.856 [2024-07-15 23:28:36.966358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.856 [2024-07-15 23:28:36.966386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.856 qpair failed and we were unable to recover it. 00:25:21.856 [2024-07-15 23:28:36.966505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.857 [2024-07-15 23:28:36.966531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.857 qpair failed and we were unable to recover it. 00:25:21.857 [2024-07-15 23:28:36.966698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.857 [2024-07-15 23:28:36.966724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.857 qpair failed and we were unable to recover it. 
00:25:21.857 [2024-07-15 23:28:36.966870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.857 [2024-07-15 23:28:36.966901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.857 qpair failed and we were unable to recover it. 00:25:21.857 [2024-07-15 23:28:36.967023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.857 [2024-07-15 23:28:36.967049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.857 qpair failed and we were unable to recover it. 00:25:21.857 [2024-07-15 23:28:36.967219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.857 [2024-07-15 23:28:36.967245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.857 qpair failed and we were unable to recover it. 00:25:21.857 [2024-07-15 23:28:36.967390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.857 [2024-07-15 23:28:36.967415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.857 qpair failed and we were unable to recover it. 00:25:21.857 [2024-07-15 23:28:36.967586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.857 [2024-07-15 23:28:36.967612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.857 qpair failed and we were unable to recover it. 00:25:21.857 [2024-07-15 23:28:36.967733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.857 [2024-07-15 23:28:36.967767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.857 qpair failed and we were unable to recover it. 00:25:21.857 [2024-07-15 23:28:36.967923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.857 [2024-07-15 23:28:36.967949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.857 qpair failed and we were unable to recover it. 00:25:21.857 [2024-07-15 23:28:36.968067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.857 [2024-07-15 23:28:36.968095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.857 qpair failed and we were unable to recover it. 00:25:21.857 [2024-07-15 23:28:36.968269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.857 [2024-07-15 23:28:36.968295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.857 qpair failed and we were unable to recover it. 00:25:21.857 [2024-07-15 23:28:36.968411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.857 [2024-07-15 23:28:36.968437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.857 qpair failed and we were unable to recover it. 
00:25:21.857 [2024-07-15 23:28:36.968576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.857 [2024-07-15 23:28:36.968602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.857 qpair failed and we were unable to recover it. 00:25:21.857 [2024-07-15 23:28:36.968757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.857 [2024-07-15 23:28:36.968784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.857 qpair failed and we were unable to recover it. 00:25:21.857 [2024-07-15 23:28:36.968905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.857 [2024-07-15 23:28:36.968931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.857 qpair failed and we were unable to recover it. 00:25:21.857 [2024-07-15 23:28:36.969074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.857 [2024-07-15 23:28:36.969099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.857 qpair failed and we were unable to recover it. 00:25:21.857 [2024-07-15 23:28:36.969247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.857 [2024-07-15 23:28:36.969273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.857 qpair failed and we were unable to recover it. 00:25:21.857 [2024-07-15 23:28:36.969415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.857 [2024-07-15 23:28:36.969441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.857 qpair failed and we were unable to recover it. 00:25:21.857 [2024-07-15 23:28:36.969603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.857 [2024-07-15 23:28:36.969628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.857 qpair failed and we were unable to recover it. 00:25:21.857 [2024-07-15 23:28:36.969771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.857 [2024-07-15 23:28:36.969797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.857 qpair failed and we were unable to recover it. 00:25:21.857 [2024-07-15 23:28:36.969917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.857 [2024-07-15 23:28:36.969943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.857 qpair failed and we were unable to recover it. 00:25:21.857 [2024-07-15 23:28:36.970082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.857 [2024-07-15 23:28:36.970107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.857 qpair failed and we were unable to recover it. 
00:25:21.857 [2024-07-15 23:28:36.970226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.857 [2024-07-15 23:28:36.970252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.857 qpair failed and we were unable to recover it. 00:25:21.857 [2024-07-15 23:28:36.970373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.857 [2024-07-15 23:28:36.970399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.857 qpair failed and we were unable to recover it. 00:25:21.857 [2024-07-15 23:28:36.970543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.857 [2024-07-15 23:28:36.970568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.857 qpair failed and we were unable to recover it. 00:25:21.857 [2024-07-15 23:28:36.970683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.857 [2024-07-15 23:28:36.970708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.857 qpair failed and we were unable to recover it. 00:25:21.857 [2024-07-15 23:28:36.970840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.857 [2024-07-15 23:28:36.970867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.857 qpair failed and we were unable to recover it. 00:25:21.857 [2024-07-15 23:28:36.970983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.857 [2024-07-15 23:28:36.971008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.857 qpair failed and we were unable to recover it. 00:25:21.857 [2024-07-15 23:28:36.971157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.857 [2024-07-15 23:28:36.971182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.857 qpair failed and we were unable to recover it. 00:25:21.857 [2024-07-15 23:28:36.971326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.857 [2024-07-15 23:28:36.971356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.857 qpair failed and we were unable to recover it. 00:25:21.857 [2024-07-15 23:28:36.971463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.857 [2024-07-15 23:28:36.971489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.858 qpair failed and we were unable to recover it. 00:25:21.858 [2024-07-15 23:28:36.971635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.858 [2024-07-15 23:28:36.971661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.858 qpair failed and we were unable to recover it. 
00:25:21.858 [2024-07-15 23:28:36.971792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.858 [2024-07-15 23:28:36.971819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.858 qpair failed and we were unable to recover it. 00:25:21.858 [2024-07-15 23:28:36.971962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.858 [2024-07-15 23:28:36.971988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.858 qpair failed and we were unable to recover it. 00:25:21.858 [2024-07-15 23:28:36.972135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.858 [2024-07-15 23:28:36.972161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.858 qpair failed and we were unable to recover it. 00:25:21.858 [2024-07-15 23:28:36.972315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.858 [2024-07-15 23:28:36.972341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.858 qpair failed and we were unable to recover it. 00:25:21.858 [2024-07-15 23:28:36.972460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.858 [2024-07-15 23:28:36.972487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.858 qpair failed and we were unable to recover it. 00:25:21.858 [2024-07-15 23:28:36.972603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.858 [2024-07-15 23:28:36.972629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.858 qpair failed and we were unable to recover it. 00:25:21.858 [2024-07-15 23:28:36.972793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.858 [2024-07-15 23:28:36.972820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.858 qpair failed and we were unable to recover it. 00:25:21.858 [2024-07-15 23:28:36.972931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.858 [2024-07-15 23:28:36.972957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.858 qpair failed and we were unable to recover it. 00:25:21.858 [2024-07-15 23:28:36.973107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.858 [2024-07-15 23:28:36.973133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.858 qpair failed and we were unable to recover it. 00:25:21.858 [2024-07-15 23:28:36.973278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.858 [2024-07-15 23:28:36.973304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.858 qpair failed and we were unable to recover it. 
00:25:21.864 [2024-07-15 23:28:37.002540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.864 [2024-07-15 23:28:37.002565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.864 qpair failed and we were unable to recover it. 00:25:21.864 [2024-07-15 23:28:37.002701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.864 [2024-07-15 23:28:37.002727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.864 qpair failed and we were unable to recover it. 00:25:21.864 [2024-07-15 23:28:37.002864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.864 [2024-07-15 23:28:37.002856] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:25:21.864 [2024-07-15 23:28:37.002890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.864 qpair failed and we were unable to recover it. 00:25:21.864 [2024-07-15 23:28:37.002934] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:21.864 [2024-07-15 23:28:37.003007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.864 [2024-07-15 23:28:37.003032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.864 qpair failed and we were unable to recover it. 00:25:21.864 [2024-07-15 23:28:37.003132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.864 [2024-07-15 23:28:37.003156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.864 qpair failed and we were unable to recover it. 00:25:21.864 [2024-07-15 23:28:37.003321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.864 [2024-07-15 23:28:37.003344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.864 qpair failed and we were unable to recover it. 00:25:21.864 [2024-07-15 23:28:37.003472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.864 [2024-07-15 23:28:37.003511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.864 qpair failed and we were unable to recover it. 00:25:21.864 [2024-07-15 23:28:37.003644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.864 [2024-07-15 23:28:37.003669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.864 qpair failed and we were unable to recover it. 00:25:21.864 [2024-07-15 23:28:37.003786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.864 [2024-07-15 23:28:37.003811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.864 qpair failed and we were unable to recover it. 
00:25:21.864 [2024-07-15 23:28:37.003937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.864 [2024-07-15 23:28:37.003962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.864 qpair failed and we were unable to recover it. 00:25:21.864 [2024-07-15 23:28:37.004082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.864 [2024-07-15 23:28:37.004107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.864 qpair failed and we were unable to recover it. 00:25:21.864 [2024-07-15 23:28:37.004274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.864 [2024-07-15 23:28:37.004300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.864 qpair failed and we were unable to recover it. 00:25:21.864 [2024-07-15 23:28:37.004439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.864 [2024-07-15 23:28:37.004464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.864 qpair failed and we were unable to recover it. 00:25:21.864 [2024-07-15 23:28:37.004604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.864 [2024-07-15 23:28:37.004629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.864 qpair failed and we were unable to recover it. 00:25:21.864 [2024-07-15 23:28:37.004772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.864 [2024-07-15 23:28:37.004798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.864 qpair failed and we were unable to recover it. 00:25:21.864 [2024-07-15 23:28:37.004920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.864 [2024-07-15 23:28:37.004945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.864 qpair failed and we were unable to recover it. 00:25:21.864 [2024-07-15 23:28:37.005058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.864 [2024-07-15 23:28:37.005084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.864 qpair failed and we were unable to recover it. 00:25:21.864 [2024-07-15 23:28:37.005218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.865 [2024-07-15 23:28:37.005243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.865 qpair failed and we were unable to recover it. 00:25:21.865 [2024-07-15 23:28:37.005380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.865 [2024-07-15 23:28:37.005405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.865 qpair failed and we were unable to recover it. 
00:25:21.865 [2024-07-15 23:28:37.005555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.865 [2024-07-15 23:28:37.005595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.865 qpair failed and we were unable to recover it. 00:25:21.865 [2024-07-15 23:28:37.005715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.865 [2024-07-15 23:28:37.005753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.865 qpair failed and we were unable to recover it. 00:25:21.865 [2024-07-15 23:28:37.005863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.865 [2024-07-15 23:28:37.005889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.865 qpair failed and we were unable to recover it. 00:25:21.865 [2024-07-15 23:28:37.006009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.865 [2024-07-15 23:28:37.006034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.865 qpair failed and we were unable to recover it. 00:25:21.865 [2024-07-15 23:28:37.006197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.865 [2024-07-15 23:28:37.006223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.865 qpair failed and we were unable to recover it. 00:25:21.865 [2024-07-15 23:28:37.006364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.865 [2024-07-15 23:28:37.006390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.865 qpair failed and we were unable to recover it. 00:25:21.865 [2024-07-15 23:28:37.006532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.865 [2024-07-15 23:28:37.006557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.865 qpair failed and we were unable to recover it. 00:25:21.865 [2024-07-15 23:28:37.006693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.865 [2024-07-15 23:28:37.006719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.865 qpair failed and we were unable to recover it. 00:25:21.865 [2024-07-15 23:28:37.006831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.865 [2024-07-15 23:28:37.006857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.865 qpair failed and we were unable to recover it. 00:25:21.865 [2024-07-15 23:28:37.006963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.865 [2024-07-15 23:28:37.006988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.865 qpair failed and we were unable to recover it. 
00:25:21.865 [2024-07-15 23:28:37.007120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.865 [2024-07-15 23:28:37.007145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.865 qpair failed and we were unable to recover it. 00:25:21.865 [2024-07-15 23:28:37.007259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.865 [2024-07-15 23:28:37.007284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.865 qpair failed and we were unable to recover it. 00:25:21.865 [2024-07-15 23:28:37.007403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.865 [2024-07-15 23:28:37.007428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.865 qpair failed and we were unable to recover it. 00:25:21.865 [2024-07-15 23:28:37.007561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.865 [2024-07-15 23:28:37.007586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.865 qpair failed and we were unable to recover it. 00:25:21.865 [2024-07-15 23:28:37.007687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.865 [2024-07-15 23:28:37.007712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.865 qpair failed and we were unable to recover it. 00:25:21.865 [2024-07-15 23:28:37.007839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.865 [2024-07-15 23:28:37.007865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.865 qpair failed and we were unable to recover it. 00:25:21.865 [2024-07-15 23:28:37.007976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.865 [2024-07-15 23:28:37.008002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.865 qpair failed and we were unable to recover it. 00:25:21.865 [2024-07-15 23:28:37.008148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.865 [2024-07-15 23:28:37.008173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.865 qpair failed and we were unable to recover it. 00:25:21.865 [2024-07-15 23:28:37.008281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.865 [2024-07-15 23:28:37.008307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.865 qpair failed and we were unable to recover it. 00:25:21.865 [2024-07-15 23:28:37.008460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.865 [2024-07-15 23:28:37.008485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.865 qpair failed and we were unable to recover it. 
00:25:21.865 [2024-07-15 23:28:37.008629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.865 [2024-07-15 23:28:37.008654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.865 qpair failed and we were unable to recover it. 00:25:21.865 [2024-07-15 23:28:37.008785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.865 [2024-07-15 23:28:37.008812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.865 qpair failed and we were unable to recover it. 00:25:21.865 [2024-07-15 23:28:37.008922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.865 [2024-07-15 23:28:37.008948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.865 qpair failed and we were unable to recover it. 00:25:21.865 [2024-07-15 23:28:37.009071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.865 [2024-07-15 23:28:37.009097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.865 qpair failed and we were unable to recover it. 00:25:21.865 [2024-07-15 23:28:37.009257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.865 [2024-07-15 23:28:37.009283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.865 qpair failed and we were unable to recover it. 00:25:21.865 [2024-07-15 23:28:37.009390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.865 [2024-07-15 23:28:37.009415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.865 qpair failed and we were unable to recover it. 00:25:21.865 [2024-07-15 23:28:37.009545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.865 [2024-07-15 23:28:37.009570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.865 qpair failed and we were unable to recover it. 00:25:21.865 [2024-07-15 23:28:37.009708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.865 [2024-07-15 23:28:37.009734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.865 qpair failed and we were unable to recover it. 00:25:21.865 [2024-07-15 23:28:37.009856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.865 [2024-07-15 23:28:37.009881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.865 qpair failed and we were unable to recover it. 00:25:21.865 [2024-07-15 23:28:37.009994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.865 [2024-07-15 23:28:37.010019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.866 qpair failed and we were unable to recover it. 
00:25:21.866 [2024-07-15 23:28:37.010136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.866 [2024-07-15 23:28:37.010161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.866 qpair failed and we were unable to recover it. 00:25:21.866 [2024-07-15 23:28:37.010297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.866 [2024-07-15 23:28:37.010327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.866 qpair failed and we were unable to recover it. 00:25:21.866 [2024-07-15 23:28:37.010492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.866 [2024-07-15 23:28:37.010517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.866 qpair failed and we were unable to recover it. 00:25:21.866 [2024-07-15 23:28:37.010620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.866 [2024-07-15 23:28:37.010645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.866 qpair failed and we were unable to recover it. 00:25:21.866 [2024-07-15 23:28:37.010763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.866 [2024-07-15 23:28:37.010789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.866 qpair failed and we were unable to recover it. 00:25:21.866 [2024-07-15 23:28:37.010908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.866 [2024-07-15 23:28:37.010934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.866 qpair failed and we were unable to recover it. 00:25:21.866 [2024-07-15 23:28:37.011080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.866 [2024-07-15 23:28:37.011105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.866 qpair failed and we were unable to recover it. 00:25:21.866 [2024-07-15 23:28:37.011217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.866 [2024-07-15 23:28:37.011243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.866 qpair failed and we were unable to recover it. 00:25:21.866 [2024-07-15 23:28:37.011378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.866 [2024-07-15 23:28:37.011403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.866 qpair failed and we were unable to recover it. 00:25:21.866 [2024-07-15 23:28:37.011545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.866 [2024-07-15 23:28:37.011570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.866 qpair failed and we were unable to recover it. 
00:25:21.866 [2024-07-15 23:28:37.011725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.866 [2024-07-15 23:28:37.011757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.866 qpair failed and we were unable to recover it. 00:25:21.866 [2024-07-15 23:28:37.011865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.866 [2024-07-15 23:28:37.011890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.866 qpair failed and we were unable to recover it. 00:25:21.866 [2024-07-15 23:28:37.012006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.866 [2024-07-15 23:28:37.012031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.866 qpair failed and we were unable to recover it. 00:25:21.866 [2024-07-15 23:28:37.012141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.866 [2024-07-15 23:28:37.012167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.866 qpair failed and we were unable to recover it. 00:25:21.866 [2024-07-15 23:28:37.012265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.866 [2024-07-15 23:28:37.012290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.866 qpair failed and we were unable to recover it. 00:25:21.866 [2024-07-15 23:28:37.012504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.866 [2024-07-15 23:28:37.012529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.866 qpair failed and we were unable to recover it. 00:25:21.866 [2024-07-15 23:28:37.012656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.866 [2024-07-15 23:28:37.012682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.866 qpair failed and we were unable to recover it. 00:25:21.866 [2024-07-15 23:28:37.012809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.866 [2024-07-15 23:28:37.012835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.866 qpair failed and we were unable to recover it. 00:25:21.866 [2024-07-15 23:28:37.012950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.866 [2024-07-15 23:28:37.012975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.866 qpair failed and we were unable to recover it. 00:25:21.866 [2024-07-15 23:28:37.013139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.866 [2024-07-15 23:28:37.013164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.866 qpair failed and we were unable to recover it. 
00:25:21.866 [2024-07-15 23:28:37.013298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.866 [2024-07-15 23:28:37.013322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.866 qpair failed and we were unable to recover it. 00:25:21.866 [2024-07-15 23:28:37.013464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.866 [2024-07-15 23:28:37.013489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.866 qpair failed and we were unable to recover it. 00:25:21.866 [2024-07-15 23:28:37.013625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.866 [2024-07-15 23:28:37.013650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.866 qpair failed and we were unable to recover it. 00:25:21.866 [2024-07-15 23:28:37.013796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.866 [2024-07-15 23:28:37.013821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.866 qpair failed and we were unable to recover it. 00:25:21.866 [2024-07-15 23:28:37.013929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.866 [2024-07-15 23:28:37.013954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.866 qpair failed and we were unable to recover it. 00:25:21.866 [2024-07-15 23:28:37.014066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.866 [2024-07-15 23:28:37.014091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.866 qpair failed and we were unable to recover it. 00:25:21.866 [2024-07-15 23:28:37.014228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.866 [2024-07-15 23:28:37.014253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.866 qpair failed and we were unable to recover it. 00:25:21.866 [2024-07-15 23:28:37.014387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.866 [2024-07-15 23:28:37.014412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.866 qpair failed and we were unable to recover it. 00:25:21.866 [2024-07-15 23:28:37.014517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.866 [2024-07-15 23:28:37.014547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.866 qpair failed and we were unable to recover it. 00:25:21.866 [2024-07-15 23:28:37.014667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.866 [2024-07-15 23:28:37.014692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.866 qpair failed and we were unable to recover it. 
00:25:21.866 [2024-07-15 23:28:37.014820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.866 [2024-07-15 23:28:37.014846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.867 qpair failed and we were unable to recover it. 00:25:21.867 [2024-07-15 23:28:37.014954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.867 [2024-07-15 23:28:37.014979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.867 qpair failed and we were unable to recover it. 00:25:21.867 [2024-07-15 23:28:37.015110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.867 [2024-07-15 23:28:37.015135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.867 qpair failed and we were unable to recover it. 00:25:21.867 [2024-07-15 23:28:37.015247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.867 [2024-07-15 23:28:37.015273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.867 qpair failed and we were unable to recover it. 00:25:21.867 [2024-07-15 23:28:37.015377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.867 [2024-07-15 23:28:37.015402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.867 qpair failed and we were unable to recover it. 00:25:21.867 [2024-07-15 23:28:37.015516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.867 [2024-07-15 23:28:37.015541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.867 qpair failed and we were unable to recover it. 00:25:21.867 [2024-07-15 23:28:37.015648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.867 [2024-07-15 23:28:37.015674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.867 qpair failed and we were unable to recover it. 00:25:21.867 [2024-07-15 23:28:37.015804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.867 [2024-07-15 23:28:37.015830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.867 qpair failed and we were unable to recover it. 00:25:21.867 [2024-07-15 23:28:37.015979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.867 [2024-07-15 23:28:37.016005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.867 qpair failed and we were unable to recover it. 00:25:21.867 [2024-07-15 23:28:37.016143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.867 [2024-07-15 23:28:37.016168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.867 qpair failed and we were unable to recover it. 
00:25:21.867 [2024-07-15 23:28:37.016287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.867 [2024-07-15 23:28:37.016312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.867 qpair failed and we were unable to recover it. 00:25:21.867 [2024-07-15 23:28:37.016456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.867 [2024-07-15 23:28:37.016481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.867 qpair failed and we were unable to recover it. 00:25:21.867 [2024-07-15 23:28:37.016627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.867 [2024-07-15 23:28:37.016652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.867 qpair failed and we were unable to recover it. 00:25:21.867 [2024-07-15 23:28:37.016767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.867 [2024-07-15 23:28:37.016793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.867 qpair failed and we were unable to recover it. 00:25:21.867 [2024-07-15 23:28:37.016933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.867 [2024-07-15 23:28:37.016958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.867 qpair failed and we were unable to recover it. 00:25:21.867 [2024-07-15 23:28:37.017098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.867 [2024-07-15 23:28:37.017123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.867 qpair failed and we were unable to recover it. 00:25:21.867 [2024-07-15 23:28:37.017269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.867 [2024-07-15 23:28:37.017294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.867 qpair failed and we were unable to recover it. 00:25:21.867 [2024-07-15 23:28:37.017407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.867 [2024-07-15 23:28:37.017433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.867 qpair failed and we were unable to recover it. 00:25:21.867 [2024-07-15 23:28:37.017564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.867 [2024-07-15 23:28:37.017589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.867 qpair failed and we were unable to recover it. 00:25:21.867 [2024-07-15 23:28:37.017731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.867 [2024-07-15 23:28:37.017764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.867 qpair failed and we were unable to recover it. 
00:25:21.867 [2024-07-15 23:28:37.017908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.867 [2024-07-15 23:28:37.017934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.867 qpair failed and we were unable to recover it. 00:25:21.867 [2024-07-15 23:28:37.018081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.867 [2024-07-15 23:28:37.018106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.867 qpair failed and we were unable to recover it. 00:25:21.867 [2024-07-15 23:28:37.018241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.867 [2024-07-15 23:28:37.018265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.867 qpair failed and we were unable to recover it. 00:25:21.867 [2024-07-15 23:28:37.018387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.867 [2024-07-15 23:28:37.018412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.867 qpair failed and we were unable to recover it. 00:25:21.867 [2024-07-15 23:28:37.018544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.867 [2024-07-15 23:28:37.018569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.867 qpair failed and we were unable to recover it. 00:25:21.867 [2024-07-15 23:28:37.018707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.867 [2024-07-15 23:28:37.018736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.867 qpair failed and we were unable to recover it. 00:25:21.867 [2024-07-15 23:28:37.018857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.867 [2024-07-15 23:28:37.018883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.868 qpair failed and we were unable to recover it. 00:25:21.868 [2024-07-15 23:28:37.018981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.868 [2024-07-15 23:28:37.019006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.868 qpair failed and we were unable to recover it. 00:25:21.868 [2024-07-15 23:28:37.019116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.868 [2024-07-15 23:28:37.019141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.868 qpair failed and we were unable to recover it. 00:25:21.868 [2024-07-15 23:28:37.019257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.868 [2024-07-15 23:28:37.019282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.868 qpair failed and we were unable to recover it. 
00:25:21.868 [2024-07-15 23:28:37.019387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.868 [2024-07-15 23:28:37.019412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.868 qpair failed and we were unable to recover it. 00:25:21.868 [2024-07-15 23:28:37.019552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.868 [2024-07-15 23:28:37.019577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.868 qpair failed and we were unable to recover it. 00:25:21.868 [2024-07-15 23:28:37.019719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.868 [2024-07-15 23:28:37.019751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.868 qpair failed and we were unable to recover it. 00:25:21.868 [2024-07-15 23:28:37.019871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.868 [2024-07-15 23:28:37.019896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.868 qpair failed and we were unable to recover it. 00:25:21.868 [2024-07-15 23:28:37.020005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.868 [2024-07-15 23:28:37.020030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.868 qpair failed and we were unable to recover it. 00:25:21.868 [2024-07-15 23:28:37.020134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.868 [2024-07-15 23:28:37.020169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.868 qpair failed and we were unable to recover it. 00:25:21.868 [2024-07-15 23:28:37.020332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.868 [2024-07-15 23:28:37.020357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.868 qpair failed and we were unable to recover it. 00:25:21.868 [2024-07-15 23:28:37.020466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.868 [2024-07-15 23:28:37.020492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.868 qpair failed and we were unable to recover it. 00:25:21.868 [2024-07-15 23:28:37.020624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.868 [2024-07-15 23:28:37.020649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.868 qpair failed and we were unable to recover it. 00:25:21.868 [2024-07-15 23:28:37.020803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.868 [2024-07-15 23:28:37.020829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.868 qpair failed and we were unable to recover it. 
00:25:21.868 [2024-07-15 23:28:37.020937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.868 [2024-07-15 23:28:37.020962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.868 qpair failed and we were unable to recover it. 00:25:21.868 [2024-07-15 23:28:37.021158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.868 [2024-07-15 23:28:37.021183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.868 qpair failed and we were unable to recover it. 00:25:21.868 [2024-07-15 23:28:37.021334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.868 [2024-07-15 23:28:37.021359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.868 qpair failed and we were unable to recover it. 00:25:21.868 [2024-07-15 23:28:37.021512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.868 [2024-07-15 23:28:37.021537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.868 qpair failed and we were unable to recover it. 00:25:21.868 [2024-07-15 23:28:37.021692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.868 [2024-07-15 23:28:37.021717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.868 qpair failed and we were unable to recover it. 00:25:21.868 [2024-07-15 23:28:37.021843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.868 [2024-07-15 23:28:37.021868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.868 qpair failed and we were unable to recover it. 00:25:21.868 [2024-07-15 23:28:37.022002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.868 [2024-07-15 23:28:37.022027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.868 qpair failed and we were unable to recover it. 00:25:21.868 [2024-07-15 23:28:37.022170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.868 [2024-07-15 23:28:37.022195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.868 qpair failed and we were unable to recover it. 00:25:21.868 [2024-07-15 23:28:37.022323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.868 [2024-07-15 23:28:37.022348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.868 qpair failed and we were unable to recover it. 00:25:21.868 [2024-07-15 23:28:37.022489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.868 [2024-07-15 23:28:37.022514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.868 qpair failed and we were unable to recover it. 
00:25:21.868 [2024-07-15 23:28:37.022624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.868 [2024-07-15 23:28:37.022648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.868 qpair failed and we were unable to recover it. 00:25:21.868 [2024-07-15 23:28:37.022795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.868 [2024-07-15 23:28:37.022821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.868 qpair failed and we were unable to recover it. 00:25:21.868 [2024-07-15 23:28:37.022940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.868 [2024-07-15 23:28:37.022965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.868 qpair failed and we were unable to recover it. 00:25:21.868 [2024-07-15 23:28:37.023118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.868 [2024-07-15 23:28:37.023143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.868 qpair failed and we were unable to recover it. 00:25:21.868 [2024-07-15 23:28:37.023281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.868 [2024-07-15 23:28:37.023306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.868 qpair failed and we were unable to recover it. 00:25:21.868 [2024-07-15 23:28:37.023446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.868 [2024-07-15 23:28:37.023470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.868 qpair failed and we were unable to recover it. 00:25:21.868 [2024-07-15 23:28:37.023622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.868 [2024-07-15 23:28:37.023662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.868 qpair failed and we were unable to recover it. 00:25:21.868 [2024-07-15 23:28:37.023799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.868 [2024-07-15 23:28:37.023825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.868 qpair failed and we were unable to recover it. 00:25:21.868 [2024-07-15 23:28:37.023936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.868 [2024-07-15 23:28:37.023961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.868 qpair failed and we were unable to recover it. 00:25:21.868 [2024-07-15 23:28:37.024111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.868 [2024-07-15 23:28:37.024148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.868 qpair failed and we were unable to recover it. 
00:25:21.869 [2024-07-15 23:28:37.024325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.869 [2024-07-15 23:28:37.024348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.869 qpair failed and we were unable to recover it. 00:25:21.869 [2024-07-15 23:28:37.024498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.869 [2024-07-15 23:28:37.024522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.869 qpair failed and we were unable to recover it. 00:25:21.869 [2024-07-15 23:28:37.024671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.869 [2024-07-15 23:28:37.024695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.869 qpair failed and we were unable to recover it. 00:25:21.869 [2024-07-15 23:28:37.024834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.869 [2024-07-15 23:28:37.024860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.869 qpair failed and we were unable to recover it. 00:25:21.869 [2024-07-15 23:28:37.024971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.869 [2024-07-15 23:28:37.024997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.869 qpair failed and we were unable to recover it. 00:25:21.869 [2024-07-15 23:28:37.025181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.869 [2024-07-15 23:28:37.025204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.869 qpair failed and we were unable to recover it. 00:25:21.869 [2024-07-15 23:28:37.025358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.869 [2024-07-15 23:28:37.025383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.869 qpair failed and we were unable to recover it. 00:25:21.869 [2024-07-15 23:28:37.025571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.869 [2024-07-15 23:28:37.025595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.869 qpair failed and we were unable to recover it. 00:25:21.869 [2024-07-15 23:28:37.025717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.869 [2024-07-15 23:28:37.025764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.869 qpair failed and we were unable to recover it. 00:25:21.869 [2024-07-15 23:28:37.025878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.869 [2024-07-15 23:28:37.025903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.869 qpair failed and we were unable to recover it. 
00:25:21.869 [2024-07-15 23:28:37.026042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.869 [2024-07-15 23:28:37.026066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420
00:25:21.869 qpair failed and we were unable to recover it.
00:25:21.869 [... the same three-record sequence (connect() failed, errno = 111 from posix.c:1023:posix_sock_create; sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 from nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt between 23:28:37.026 and 23:28:37.065; duplicate entries omitted ...]
00:25:21.872 EAL: No free 2048 kB hugepages reported on node 1
00:25:21.876 [2024-07-15 23:28:37.065342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.876 [2024-07-15 23:28:37.065366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420
00:25:21.876 qpair failed and we were unable to recover it.
00:25:21.876 [2024-07-15 23:28:37.065520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.876 [2024-07-15 23:28:37.065543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.876 qpair failed and we were unable to recover it. 00:25:21.876 [2024-07-15 23:28:37.065684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.876 [2024-07-15 23:28:37.065708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.876 qpair failed and we were unable to recover it. 00:25:21.876 [2024-07-15 23:28:37.065848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.876 [2024-07-15 23:28:37.065872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.876 qpair failed and we were unable to recover it. 00:25:21.876 [2024-07-15 23:28:37.066006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.876 [2024-07-15 23:28:37.066032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.876 qpair failed and we were unable to recover it. 00:25:21.876 [2024-07-15 23:28:37.066130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.876 [2024-07-15 23:28:37.066153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.876 qpair failed and we were unable to recover it. 00:25:21.876 [2024-07-15 23:28:37.066292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.876 [2024-07-15 23:28:37.066316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.876 qpair failed and we were unable to recover it. 00:25:21.876 [2024-07-15 23:28:37.066438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.876 [2024-07-15 23:28:37.066463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.876 qpair failed and we were unable to recover it. 00:25:21.876 [2024-07-15 23:28:37.066626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.876 [2024-07-15 23:28:37.066651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.876 qpair failed and we were unable to recover it. 00:25:21.876 [2024-07-15 23:28:37.066769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.876 [2024-07-15 23:28:37.066794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.876 qpair failed and we were unable to recover it. 00:25:21.876 [2024-07-15 23:28:37.066949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.876 [2024-07-15 23:28:37.066973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.876 qpair failed and we were unable to recover it. 
00:25:21.876 [2024-07-15 23:28:37.067140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.876 [2024-07-15 23:28:37.067180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.876 qpair failed and we were unable to recover it. 00:25:21.876 [2024-07-15 23:28:37.067306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.876 [2024-07-15 23:28:37.067329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.876 qpair failed and we were unable to recover it. 00:25:21.876 [2024-07-15 23:28:37.067503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.876 [2024-07-15 23:28:37.067528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.876 qpair failed and we were unable to recover it. 00:25:21.876 [2024-07-15 23:28:37.067750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.876 [2024-07-15 23:28:37.067776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.876 qpair failed and we were unable to recover it. 00:25:21.876 [2024-07-15 23:28:37.067919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.876 [2024-07-15 23:28:37.067944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.876 qpair failed and we were unable to recover it. 00:25:21.876 [2024-07-15 23:28:37.068096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.876 [2024-07-15 23:28:37.068120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.876 qpair failed and we were unable to recover it. 00:25:21.876 [2024-07-15 23:28:37.068280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.876 [2024-07-15 23:28:37.068304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.876 qpair failed and we were unable to recover it. 00:25:21.876 [2024-07-15 23:28:37.068450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.876 [2024-07-15 23:28:37.068474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.876 qpair failed and we were unable to recover it. 00:25:21.876 [2024-07-15 23:28:37.068748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.876 [2024-07-15 23:28:37.068774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.876 qpair failed and we were unable to recover it. 00:25:21.876 [2024-07-15 23:28:37.068910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.876 [2024-07-15 23:28:37.068936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.876 qpair failed and we were unable to recover it. 
00:25:21.876 [2024-07-15 23:28:37.069066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.876 [2024-07-15 23:28:37.069090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.876 qpair failed and we were unable to recover it. 00:25:21.876 [2024-07-15 23:28:37.069259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.877 [2024-07-15 23:28:37.069282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.877 qpair failed and we were unable to recover it. 00:25:21.877 [2024-07-15 23:28:37.069403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.877 [2024-07-15 23:28:37.069426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.877 qpair failed and we were unable to recover it. 00:25:21.877 [2024-07-15 23:28:37.069551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.877 [2024-07-15 23:28:37.069574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.877 qpair failed and we were unable to recover it. 00:25:21.877 [2024-07-15 23:28:37.069754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.877 [2024-07-15 23:28:37.069781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.877 qpair failed and we were unable to recover it. 00:25:21.877 [2024-07-15 23:28:37.069926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.877 [2024-07-15 23:28:37.069951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.877 qpair failed and we were unable to recover it. 00:25:21.877 [2024-07-15 23:28:37.070088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.877 [2024-07-15 23:28:37.070111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.877 qpair failed and we were unable to recover it. 00:25:21.877 [2024-07-15 23:28:37.070266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.877 [2024-07-15 23:28:37.070305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.877 qpair failed and we were unable to recover it. 00:25:21.877 [2024-07-15 23:28:37.070468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.877 [2024-07-15 23:28:37.070492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.877 qpair failed and we were unable to recover it. 00:25:21.877 [2024-07-15 23:28:37.070628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.877 [2024-07-15 23:28:37.070651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.877 qpair failed and we were unable to recover it. 
00:25:21.877 [2024-07-15 23:28:37.070825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.877 [2024-07-15 23:28:37.070850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.877 qpair failed and we were unable to recover it. 00:25:21.877 [2024-07-15 23:28:37.070968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.877 [2024-07-15 23:28:37.070994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.877 qpair failed and we were unable to recover it. 00:25:21.877 [2024-07-15 23:28:37.071105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.877 [2024-07-15 23:28:37.071129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.877 qpair failed and we were unable to recover it. 00:25:21.877 [2024-07-15 23:28:37.071279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.877 [2024-07-15 23:28:37.071318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.877 qpair failed and we were unable to recover it. 00:25:21.877 [2024-07-15 23:28:37.071475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.877 [2024-07-15 23:28:37.071499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.877 qpair failed and we were unable to recover it. 00:25:21.877 [2024-07-15 23:28:37.071662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.877 [2024-07-15 23:28:37.071684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.877 qpair failed and we were unable to recover it. 00:25:21.877 [2024-07-15 23:28:37.071870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.877 [2024-07-15 23:28:37.071897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.877 qpair failed and we were unable to recover it. 00:25:21.877 [2024-07-15 23:28:37.072013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.877 [2024-07-15 23:28:37.072039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.877 qpair failed and we were unable to recover it. 00:25:21.877 [2024-07-15 23:28:37.072200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.877 [2024-07-15 23:28:37.072222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.877 qpair failed and we were unable to recover it. 00:25:21.877 [2024-07-15 23:28:37.072370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.877 [2024-07-15 23:28:37.072408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.877 qpair failed and we were unable to recover it. 
00:25:21.877 [2024-07-15 23:28:37.072570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.877 [2024-07-15 23:28:37.072594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.877 qpair failed and we were unable to recover it. 00:25:21.877 [2024-07-15 23:28:37.072743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.877 [2024-07-15 23:28:37.072768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.877 qpair failed and we were unable to recover it. 00:25:21.877 [2024-07-15 23:28:37.072893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.877 [2024-07-15 23:28:37.072917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.877 qpair failed and we were unable to recover it. 00:25:21.877 [2024-07-15 23:28:37.073034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.877 [2024-07-15 23:28:37.073057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.877 qpair failed and we were unable to recover it. 00:25:21.877 [2024-07-15 23:28:37.073172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.877 [2024-07-15 23:28:37.073195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.877 qpair failed and we were unable to recover it. 00:25:21.877 [2024-07-15 23:28:37.073325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.877 [2024-07-15 23:28:37.073348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.877 qpair failed and we were unable to recover it. 00:25:21.877 [2024-07-15 23:28:37.073484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.877 [2024-07-15 23:28:37.073507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.877 qpair failed and we were unable to recover it. 00:25:21.877 [2024-07-15 23:28:37.073651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.877 [2024-07-15 23:28:37.073674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.877 qpair failed and we were unable to recover it. 00:25:21.877 [2024-07-15 23:28:37.073834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.877 [2024-07-15 23:28:37.073859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.877 qpair failed and we were unable to recover it. 00:25:21.877 [2024-07-15 23:28:37.074000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.877 [2024-07-15 23:28:37.074038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.877 qpair failed and we were unable to recover it. 
00:25:21.877 [2024-07-15 23:28:37.074178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.877 [2024-07-15 23:28:37.074211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.877 qpair failed and we were unable to recover it. 00:25:21.877 [2024-07-15 23:28:37.074408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.877 [2024-07-15 23:28:37.074432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.877 qpair failed and we were unable to recover it. 00:25:21.877 [2024-07-15 23:28:37.074645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.877 [2024-07-15 23:28:37.074668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.877 qpair failed and we were unable to recover it. 00:25:21.877 [2024-07-15 23:28:37.074821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.877 [2024-07-15 23:28:37.074847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.877 qpair failed and we were unable to recover it. 00:25:21.877 [2024-07-15 23:28:37.074983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.878 [2024-07-15 23:28:37.075007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.878 qpair failed and we were unable to recover it. 00:25:21.878 [2024-07-15 23:28:37.075228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.878 [2024-07-15 23:28:37.075251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.878 qpair failed and we were unable to recover it. 00:25:21.878 [2024-07-15 23:28:37.075434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.878 [2024-07-15 23:28:37.075457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.878 qpair failed and we were unable to recover it. 00:25:21.878 [2024-07-15 23:28:37.075566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.878 [2024-07-15 23:28:37.075592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.878 qpair failed and we were unable to recover it. 00:25:21.878 [2024-07-15 23:28:37.075753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.878 [2024-07-15 23:28:37.075778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.878 qpair failed and we were unable to recover it. 00:25:21.878 [2024-07-15 23:28:37.075913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.878 [2024-07-15 23:28:37.075938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.878 qpair failed and we were unable to recover it. 
00:25:21.878 [2024-07-15 23:28:37.076053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.878 [2024-07-15 23:28:37.076076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.878 qpair failed and we were unable to recover it. 00:25:21.878 [2024-07-15 23:28:37.076247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.878 [2024-07-15 23:28:37.076269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.878 qpair failed and we were unable to recover it. 00:25:21.878 [2024-07-15 23:28:37.076421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.878 [2024-07-15 23:28:37.076444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.878 qpair failed and we were unable to recover it. 00:25:21.878 [2024-07-15 23:28:37.076577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.878 [2024-07-15 23:28:37.076600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.878 qpair failed and we were unable to recover it. 00:25:21.878 [2024-07-15 23:28:37.076760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.878 [2024-07-15 23:28:37.076785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.878 qpair failed and we were unable to recover it. 00:25:21.878 [2024-07-15 23:28:37.076905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.878 [2024-07-15 23:28:37.076930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.878 qpair failed and we were unable to recover it. 00:25:21.878 [2024-07-15 23:28:37.077070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.878 [2024-07-15 23:28:37.077093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.878 qpair failed and we were unable to recover it. 00:25:21.878 [2024-07-15 23:28:37.077273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.878 [2024-07-15 23:28:37.077296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.878 qpair failed and we were unable to recover it. 00:25:21.878 [2024-07-15 23:28:37.077443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.878 [2024-07-15 23:28:37.077467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.878 qpair failed and we were unable to recover it. 00:25:21.878 [2024-07-15 23:28:37.077659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.878 [2024-07-15 23:28:37.077681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.878 qpair failed and we were unable to recover it. 
00:25:21.878 [2024-07-15 23:28:37.077829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.878 [2024-07-15 23:28:37.077853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.878 qpair failed and we were unable to recover it. 00:25:21.878 [2024-07-15 23:28:37.077979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.878 [2024-07-15 23:28:37.078004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.878 qpair failed and we were unable to recover it. 00:25:21.878 [2024-07-15 23:28:37.078148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.878 [2024-07-15 23:28:37.078185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.878 qpair failed and we were unable to recover it. 00:25:21.878 [2024-07-15 23:28:37.078296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.878 [2024-07-15 23:28:37.078325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.878 qpair failed and we were unable to recover it. 00:25:21.878 [2024-07-15 23:28:37.078495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.878 [2024-07-15 23:28:37.078519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.878 qpair failed and we were unable to recover it. 00:25:21.878 [2024-07-15 23:28:37.078688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.878 [2024-07-15 23:28:37.078727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.878 qpair failed and we were unable to recover it. 00:25:21.878 [2024-07-15 23:28:37.078866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.878 [2024-07-15 23:28:37.078891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.878 qpair failed and we were unable to recover it. 00:25:21.878 [2024-07-15 23:28:37.079004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.878 [2024-07-15 23:28:37.079029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.878 qpair failed and we were unable to recover it. 00:25:21.879 [2024-07-15 23:28:37.079170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.879 [2024-07-15 23:28:37.079194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.879 qpair failed and we were unable to recover it. 00:25:21.879 [2024-07-15 23:28:37.079346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.879 [2024-07-15 23:28:37.079385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.879 qpair failed and we were unable to recover it. 
00:25:21.879 [2024-07-15 23:28:37.079555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.879 [2024-07-15 23:28:37.079578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.879 qpair failed and we were unable to recover it. 00:25:21.879 [2024-07-15 23:28:37.079664] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:21.879 [2024-07-15 23:28:37.079710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.879 [2024-07-15 23:28:37.079756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.879 qpair failed and we were unable to recover it. 00:25:21.879 [2024-07-15 23:28:37.079863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.879 [2024-07-15 23:28:37.079887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.879 qpair failed and we were unable to recover it. 00:25:21.879 [2024-07-15 23:28:37.080040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.879 [2024-07-15 23:28:37.080064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.879 qpair failed and we were unable to recover it. 00:25:21.879 [2024-07-15 23:28:37.080195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.879 [2024-07-15 23:28:37.080219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.879 qpair failed and we were unable to recover it. 00:25:21.879 [2024-07-15 23:28:37.080377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.879 [2024-07-15 23:28:37.080400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.879 qpair failed and we were unable to recover it. 00:25:21.879 [2024-07-15 23:28:37.080540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.879 [2024-07-15 23:28:37.080564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.879 qpair failed and we were unable to recover it. 00:25:21.879 [2024-07-15 23:28:37.080702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.879 [2024-07-15 23:28:37.080747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.879 qpair failed and we were unable to recover it. 00:25:21.879 [2024-07-15 23:28:37.080862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.879 [2024-07-15 23:28:37.080886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.879 qpair failed and we were unable to recover it. 00:25:21.879 [2024-07-15 23:28:37.081036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.879 [2024-07-15 23:28:37.081059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.879 qpair failed and we were unable to recover it. 
00:25:21.879 [2024-07-15 23:28:37.081209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.879 [2024-07-15 23:28:37.081232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.879 qpair failed and we were unable to recover it. 00:25:21.879 [2024-07-15 23:28:37.081413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.879 [2024-07-15 23:28:37.081437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.879 qpair failed and we were unable to recover it. 00:25:21.879 [2024-07-15 23:28:37.081653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.879 [2024-07-15 23:28:37.081676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.879 qpair failed and we were unable to recover it. 00:25:21.879 [2024-07-15 23:28:37.081869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.879 [2024-07-15 23:28:37.081895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.879 qpair failed and we were unable to recover it. 00:25:21.879 [2024-07-15 23:28:37.082050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.879 [2024-07-15 23:28:37.082073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.879 qpair failed and we were unable to recover it. 00:25:21.879 [2024-07-15 23:28:37.082194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.879 [2024-07-15 23:28:37.082217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.879 qpair failed and we were unable to recover it. 00:25:21.879 [2024-07-15 23:28:37.082410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.879 [2024-07-15 23:28:37.082434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.879 qpair failed and we were unable to recover it. 00:25:21.879 [2024-07-15 23:28:37.082584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.879 [2024-07-15 23:28:37.082622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.879 qpair failed and we were unable to recover it. 00:25:21.879 [2024-07-15 23:28:37.082754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.879 [2024-07-15 23:28:37.082779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.879 qpair failed and we were unable to recover it. 00:25:21.879 [2024-07-15 23:28:37.082903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.879 [2024-07-15 23:28:37.082928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.879 qpair failed and we were unable to recover it. 
00:25:21.879 [2024-07-15 23:28:37.083047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.879 [2024-07-15 23:28:37.083076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.879 qpair failed and we were unable to recover it. 00:25:21.879 [2024-07-15 23:28:37.083224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.879 [2024-07-15 23:28:37.083262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.879 qpair failed and we were unable to recover it. 00:25:21.879 [2024-07-15 23:28:37.083439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.879 [2024-07-15 23:28:37.083462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.879 qpair failed and we were unable to recover it. 00:25:21.879 [2024-07-15 23:28:37.083625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.879 [2024-07-15 23:28:37.083647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.879 qpair failed and we were unable to recover it. 00:25:21.879 [2024-07-15 23:28:37.083784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.879 [2024-07-15 23:28:37.083810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.879 qpair failed and we were unable to recover it. 00:25:21.879 [2024-07-15 23:28:37.083923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.879 [2024-07-15 23:28:37.083947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.879 qpair failed and we were unable to recover it. 00:25:21.879 [2024-07-15 23:28:37.084065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.879 [2024-07-15 23:28:37.084089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.879 qpair failed and we were unable to recover it. 00:25:21.879 [2024-07-15 23:28:37.084235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.879 [2024-07-15 23:28:37.084258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.879 qpair failed and we were unable to recover it. 00:25:21.879 [2024-07-15 23:28:37.084464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.879 [2024-07-15 23:28:37.084502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.879 qpair failed and we were unable to recover it. 00:25:21.880 [2024-07-15 23:28:37.084625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.880 [2024-07-15 23:28:37.084648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.880 qpair failed and we were unable to recover it. 
00:25:21.880 [2024-07-15 23:28:37.084807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.880 [2024-07-15 23:28:37.084832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.880 qpair failed and we were unable to recover it. 00:25:21.880 [2024-07-15 23:28:37.084941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.880 [2024-07-15 23:28:37.084973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.880 qpair failed and we were unable to recover it. 00:25:21.880 [2024-07-15 23:28:37.085215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.880 [2024-07-15 23:28:37.085239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.880 qpair failed and we were unable to recover it. 00:25:21.880 [2024-07-15 23:28:37.085373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.880 [2024-07-15 23:28:37.085396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.880 qpair failed and we were unable to recover it. 00:25:21.880 [2024-07-15 23:28:37.085551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.880 [2024-07-15 23:28:37.085574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.880 qpair failed and we were unable to recover it. 00:25:21.880 [2024-07-15 23:28:37.085714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.880 [2024-07-15 23:28:37.085760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.880 qpair failed and we were unable to recover it. 00:25:21.880 [2024-07-15 23:28:37.085898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.880 [2024-07-15 23:28:37.085923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.880 qpair failed and we were unable to recover it. 00:25:21.880 [2024-07-15 23:28:37.086047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.880 [2024-07-15 23:28:37.086069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.880 qpair failed and we were unable to recover it. 00:25:21.880 [2024-07-15 23:28:37.086252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.880 [2024-07-15 23:28:37.086275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.880 qpair failed and we were unable to recover it. 00:25:21.880 [2024-07-15 23:28:37.086401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.880 [2024-07-15 23:28:37.086424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.880 qpair failed and we were unable to recover it. 
00:25:21.880 [2024-07-15 23:28:37.086587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.880 [2024-07-15 23:28:37.086624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.880 qpair failed and we were unable to recover it. 00:25:21.880 [2024-07-15 23:28:37.086774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.880 [2024-07-15 23:28:37.086799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.880 qpair failed and we were unable to recover it. 00:25:21.880 [2024-07-15 23:28:37.086944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.880 [2024-07-15 23:28:37.086969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.880 qpair failed and we were unable to recover it. 00:25:21.880 [2024-07-15 23:28:37.087114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.880 [2024-07-15 23:28:37.087151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.880 qpair failed and we were unable to recover it. 00:25:21.880 [2024-07-15 23:28:37.087307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.880 [2024-07-15 23:28:37.087330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.880 qpair failed and we were unable to recover it. 00:25:21.880 [2024-07-15 23:28:37.087531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.880 [2024-07-15 23:28:37.087555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.880 qpair failed and we were unable to recover it. 00:25:21.880 [2024-07-15 23:28:37.087710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.880 [2024-07-15 23:28:37.087756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.880 qpair failed and we were unable to recover it. 00:25:21.880 [2024-07-15 23:28:37.087908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.880 [2024-07-15 23:28:37.087933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.880 qpair failed and we were unable to recover it. 00:25:21.880 [2024-07-15 23:28:37.088046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.880 [2024-07-15 23:28:37.088069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.880 qpair failed and we were unable to recover it. 00:25:21.880 [2024-07-15 23:28:37.088218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.880 [2024-07-15 23:28:37.088242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.880 qpair failed and we were unable to recover it. 
00:25:21.880 [2024-07-15 23:28:37.088382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.880 [2024-07-15 23:28:37.088405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.880 qpair failed and we were unable to recover it. 00:25:21.880 [2024-07-15 23:28:37.088577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.880 [2024-07-15 23:28:37.088601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.880 qpair failed and we were unable to recover it. 00:25:21.880 [2024-07-15 23:28:37.088725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.880 [2024-07-15 23:28:37.088782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.880 qpair failed and we were unable to recover it. 00:25:21.880 [2024-07-15 23:28:37.088900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.880 [2024-07-15 23:28:37.088925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.880 qpair failed and we were unable to recover it. 00:25:21.880 [2024-07-15 23:28:37.089155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.880 [2024-07-15 23:28:37.089178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.880 qpair failed and we were unable to recover it. 00:25:21.880 [2024-07-15 23:28:37.089336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.880 [2024-07-15 23:28:37.089358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.880 qpair failed and we were unable to recover it. 00:25:21.880 [2024-07-15 23:28:37.089552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.880 [2024-07-15 23:28:37.089574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.880 qpair failed and we were unable to recover it. 00:25:21.880 [2024-07-15 23:28:37.089716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.880 [2024-07-15 23:28:37.089746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.880 qpair failed and we were unable to recover it. 00:25:21.880 [2024-07-15 23:28:37.089902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.880 [2024-07-15 23:28:37.089931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.880 qpair failed and we were unable to recover it. 00:25:21.880 [2024-07-15 23:28:37.090063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.881 [2024-07-15 23:28:37.090102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.881 qpair failed and we were unable to recover it. 
00:25:21.881 [2024-07-15 23:28:37.090293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.881 [2024-07-15 23:28:37.090317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.881 qpair failed and we were unable to recover it. 00:25:21.881 [2024-07-15 23:28:37.090487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.881 [2024-07-15 23:28:37.090511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.881 qpair failed and we were unable to recover it. 00:25:21.881 [2024-07-15 23:28:37.090627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.881 [2024-07-15 23:28:37.090651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.881 qpair failed and we were unable to recover it. 00:25:21.881 [2024-07-15 23:28:37.090780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.881 [2024-07-15 23:28:37.090804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.881 qpair failed and we were unable to recover it. 00:25:21.881 [2024-07-15 23:28:37.090966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.881 [2024-07-15 23:28:37.090990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.881 qpair failed and we were unable to recover it. 00:25:21.881 [2024-07-15 23:28:37.091108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.881 [2024-07-15 23:28:37.091132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.881 qpair failed and we were unable to recover it. 00:25:21.881 [2024-07-15 23:28:37.091308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.881 [2024-07-15 23:28:37.091333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.881 qpair failed and we were unable to recover it. 00:25:21.881 [2024-07-15 23:28:37.091508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.881 [2024-07-15 23:28:37.091556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.881 qpair failed and we were unable to recover it. 00:25:21.881 [2024-07-15 23:28:37.091766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.881 [2024-07-15 23:28:37.091791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.881 qpair failed and we were unable to recover it. 00:25:21.881 [2024-07-15 23:28:37.091934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.881 [2024-07-15 23:28:37.091959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.881 qpair failed and we were unable to recover it. 
00:25:21.881 [2024-07-15 23:28:37.092112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.881 [2024-07-15 23:28:37.092135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.881 qpair failed and we were unable to recover it. 00:25:21.881 [2024-07-15 23:28:37.092268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.881 [2024-07-15 23:28:37.092307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.881 qpair failed and we were unable to recover it. 00:25:21.881 [2024-07-15 23:28:37.092531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.881 [2024-07-15 23:28:37.092554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.881 qpair failed and we were unable to recover it. 00:25:21.881 [2024-07-15 23:28:37.092692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.881 [2024-07-15 23:28:37.092716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.881 qpair failed and we were unable to recover it. 00:25:21.881 [2024-07-15 23:28:37.092855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.881 [2024-07-15 23:28:37.092879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.881 qpair failed and we were unable to recover it. 00:25:21.881 [2024-07-15 23:28:37.092996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.881 [2024-07-15 23:28:37.093020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.881 qpair failed and we were unable to recover it. 00:25:21.881 [2024-07-15 23:28:37.093200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.881 [2024-07-15 23:28:37.093223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.881 qpair failed and we were unable to recover it. 00:25:21.881 [2024-07-15 23:28:37.093421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.881 [2024-07-15 23:28:37.093444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.881 qpair failed and we were unable to recover it. 00:25:21.881 [2024-07-15 23:28:37.093593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.881 [2024-07-15 23:28:37.093615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.881 qpair failed and we were unable to recover it. 00:25:21.881 [2024-07-15 23:28:37.093753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.881 [2024-07-15 23:28:37.093778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.881 qpair failed and we were unable to recover it. 
00:25:21.881 [2024-07-15 23:28:37.093917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.881 [2024-07-15 23:28:37.093942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.881 qpair failed and we were unable to recover it. 00:25:21.881 [2024-07-15 23:28:37.094077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.881 [2024-07-15 23:28:37.094115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.881 qpair failed and we were unable to recover it. 00:25:21.881 [2024-07-15 23:28:37.094293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.881 [2024-07-15 23:28:37.094320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.881 qpair failed and we were unable to recover it. 00:25:21.881 [2024-07-15 23:28:37.094450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.881 [2024-07-15 23:28:37.094475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.881 qpair failed and we were unable to recover it. 00:25:21.881 [2024-07-15 23:28:37.094655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.881 [2024-07-15 23:28:37.094693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.881 qpair failed and we were unable to recover it. 00:25:21.881 [2024-07-15 23:28:37.094841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.881 [2024-07-15 23:28:37.094870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.881 qpair failed and we were unable to recover it. 00:25:21.881 [2024-07-15 23:28:37.094989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.881 [2024-07-15 23:28:37.095013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.881 qpair failed and we were unable to recover it. 00:25:21.881 [2024-07-15 23:28:37.095282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.881 [2024-07-15 23:28:37.095306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.881 qpair failed and we were unable to recover it. 00:25:21.881 [2024-07-15 23:28:37.095476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.881 [2024-07-15 23:28:37.095500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.881 qpair failed and we were unable to recover it. 00:25:21.881 [2024-07-15 23:28:37.095666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.881 [2024-07-15 23:28:37.095690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.881 qpair failed and we were unable to recover it. 
00:25:21.881 [2024-07-15 23:28:37.095868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.881 [2024-07-15 23:28:37.095892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.882 qpair failed and we were unable to recover it. 00:25:21.882 [2024-07-15 23:28:37.096003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.882 [2024-07-15 23:28:37.096029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.882 qpair failed and we were unable to recover it. 00:25:21.882 [2024-07-15 23:28:37.096180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.882 [2024-07-15 23:28:37.096220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.882 qpair failed and we were unable to recover it. 00:25:21.882 [2024-07-15 23:28:37.096436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.882 [2024-07-15 23:28:37.096460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.882 qpair failed and we were unable to recover it. 00:25:21.882 [2024-07-15 23:28:37.096576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.882 [2024-07-15 23:28:37.096599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.882 qpair failed and we were unable to recover it. 00:25:21.882 [2024-07-15 23:28:37.096751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.882 [2024-07-15 23:28:37.096776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.882 qpair failed and we were unable to recover it. 00:25:21.882 [2024-07-15 23:28:37.096888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.882 [2024-07-15 23:28:37.096911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.882 qpair failed and we were unable to recover it. 00:25:21.882 [2024-07-15 23:28:37.097069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.882 [2024-07-15 23:28:37.097108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.882 qpair failed and we were unable to recover it. 00:25:21.882 [2024-07-15 23:28:37.097315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.882 [2024-07-15 23:28:37.097339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.882 qpair failed and we were unable to recover it. 00:25:21.882 [2024-07-15 23:28:37.097493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.882 [2024-07-15 23:28:37.097517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.882 qpair failed and we were unable to recover it. 
00:25:21.882 [2024-07-15 23:28:37.097697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.882 [2024-07-15 23:28:37.097720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.882 qpair failed and we were unable to recover it. 00:25:21.882 [2024-07-15 23:28:37.097858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.882 [2024-07-15 23:28:37.097883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.882 qpair failed and we were unable to recover it. 00:25:21.882 [2024-07-15 23:28:37.098017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.882 [2024-07-15 23:28:37.098042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.882 qpair failed and we were unable to recover it. 00:25:21.882 [2024-07-15 23:28:37.098302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.882 [2024-07-15 23:28:37.098325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.882 qpair failed and we were unable to recover it. 00:25:21.882 [2024-07-15 23:28:37.098487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.882 [2024-07-15 23:28:37.098512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.882 qpair failed and we were unable to recover it. 00:25:21.882 [2024-07-15 23:28:37.098711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.882 [2024-07-15 23:28:37.098757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.882 qpair failed and we were unable to recover it. 00:25:21.882 [2024-07-15 23:28:37.098896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.882 [2024-07-15 23:28:37.098919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.882 qpair failed and we were unable to recover it. 00:25:21.882 [2024-07-15 23:28:37.099024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.882 [2024-07-15 23:28:37.099048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.882 qpair failed and we were unable to recover it. 00:25:21.882 [2024-07-15 23:28:37.099211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.882 [2024-07-15 23:28:37.099250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.882 qpair failed and we were unable to recover it. 00:25:21.882 [2024-07-15 23:28:37.099424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.882 [2024-07-15 23:28:37.099447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.882 qpair failed and we were unable to recover it. 
00:25:21.882 [2024-07-15 23:28:37.099571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.882 [2024-07-15 23:28:37.099595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.882 qpair failed and we were unable to recover it. 00:25:21.882 [2024-07-15 23:28:37.099718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.882 [2024-07-15 23:28:37.099749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.882 qpair failed and we were unable to recover it. 00:25:21.882 [2024-07-15 23:28:37.099883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.882 [2024-07-15 23:28:37.099907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.882 qpair failed and we were unable to recover it. 00:25:21.882 [2024-07-15 23:28:37.100037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.882 [2024-07-15 23:28:37.100062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.882 qpair failed and we were unable to recover it. 00:25:21.882 [2024-07-15 23:28:37.100222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.882 [2024-07-15 23:28:37.100245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.882 qpair failed and we were unable to recover it. 00:25:21.882 [2024-07-15 23:28:37.100403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.882 [2024-07-15 23:28:37.100426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.882 qpair failed and we were unable to recover it. 00:25:21.882 [2024-07-15 23:28:37.100595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.882 [2024-07-15 23:28:37.100619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.882 qpair failed and we were unable to recover it. 00:25:21.882 [2024-07-15 23:28:37.100761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.882 [2024-07-15 23:28:37.100786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.882 qpair failed and we were unable to recover it. 00:25:21.882 [2024-07-15 23:28:37.100952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.882 [2024-07-15 23:28:37.100975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.882 qpair failed and we were unable to recover it. 00:25:21.882 [2024-07-15 23:28:37.101216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.882 [2024-07-15 23:28:37.101240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.882 qpair failed and we were unable to recover it. 
00:25:21.882 [2024-07-15 23:28:37.101366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.882 [2024-07-15 23:28:37.101390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.882 qpair failed and we were unable to recover it. 00:25:21.882 [2024-07-15 23:28:37.101504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.882 [2024-07-15 23:28:37.101528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.882 qpair failed and we were unable to recover it. 00:25:21.882 [2024-07-15 23:28:37.101641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.882 [2024-07-15 23:28:37.101664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.882 qpair failed and we were unable to recover it. 00:25:21.882 [2024-07-15 23:28:37.101802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.882 [2024-07-15 23:28:37.101828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.882 qpair failed and we were unable to recover it. 00:25:21.882 [2024-07-15 23:28:37.101967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.883 [2024-07-15 23:28:37.101991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.883 qpair failed and we were unable to recover it. 00:25:21.883 [2024-07-15 23:28:37.102170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.883 [2024-07-15 23:28:37.102192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.883 qpair failed and we were unable to recover it. 00:25:21.883 [2024-07-15 23:28:37.102371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.883 [2024-07-15 23:28:37.102395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.883 qpair failed and we were unable to recover it. 00:25:21.883 [2024-07-15 23:28:37.102516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.883 [2024-07-15 23:28:37.102554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.883 qpair failed and we were unable to recover it. 00:25:21.883 [2024-07-15 23:28:37.102692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.883 [2024-07-15 23:28:37.102715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.883 qpair failed and we were unable to recover it. 00:25:21.883 [2024-07-15 23:28:37.102903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.883 [2024-07-15 23:28:37.102928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.883 qpair failed and we were unable to recover it. 
00:25:21.883 [2024-07-15 23:28:37.103055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.883 [2024-07-15 23:28:37.103078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.883 qpair failed and we were unable to recover it. 00:25:21.883 [2024-07-15 23:28:37.103279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.883 [2024-07-15 23:28:37.103310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.883 qpair failed and we were unable to recover it. 00:25:21.883 [2024-07-15 23:28:37.103483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.883 [2024-07-15 23:28:37.103506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.883 qpair failed and we were unable to recover it. 00:25:21.883 [2024-07-15 23:28:37.103718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.883 [2024-07-15 23:28:37.103763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.883 qpair failed and we were unable to recover it. 00:25:21.883 [2024-07-15 23:28:37.103902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.883 [2024-07-15 23:28:37.103927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.883 qpair failed and we were unable to recover it. 00:25:21.883 [2024-07-15 23:28:37.104053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.883 [2024-07-15 23:28:37.104076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.883 qpair failed and we were unable to recover it. 00:25:21.883 [2024-07-15 23:28:37.104357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.883 [2024-07-15 23:28:37.104381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.883 qpair failed and we were unable to recover it. 00:25:21.883 [2024-07-15 23:28:37.104524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.883 [2024-07-15 23:28:37.104547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.883 qpair failed and we were unable to recover it. 00:25:21.883 [2024-07-15 23:28:37.104691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.883 [2024-07-15 23:28:37.104714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.883 qpair failed and we were unable to recover it. 00:25:21.883 [2024-07-15 23:28:37.105009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.883 [2024-07-15 23:28:37.105047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.883 qpair failed and we were unable to recover it. 
00:25:21.883 [2024-07-15 23:28:37.105210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.883 [2024-07-15 23:28:37.105234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.883 qpair failed and we were unable to recover it. 00:25:21.883 [2024-07-15 23:28:37.105457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.883 [2024-07-15 23:28:37.105480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.883 qpair failed and we were unable to recover it. 00:25:21.883 [2024-07-15 23:28:37.105655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.883 [2024-07-15 23:28:37.105678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.883 qpair failed and we were unable to recover it. 00:25:21.883 [2024-07-15 23:28:37.105888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.883 [2024-07-15 23:28:37.105912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.883 qpair failed and we were unable to recover it. 00:25:21.883 [2024-07-15 23:28:37.106070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.883 [2024-07-15 23:28:37.106108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.883 qpair failed and we were unable to recover it. 00:25:21.883 [2024-07-15 23:28:37.106259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.883 [2024-07-15 23:28:37.106288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.883 qpair failed and we were unable to recover it. 00:25:21.883 [2024-07-15 23:28:37.106463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.883 [2024-07-15 23:28:37.106486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.883 qpair failed and we were unable to recover it. 00:25:21.883 [2024-07-15 23:28:37.106646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.883 [2024-07-15 23:28:37.106668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.883 qpair failed and we were unable to recover it. 00:25:21.883 [2024-07-15 23:28:37.106847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.883 [2024-07-15 23:28:37.106872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.883 qpair failed and we were unable to recover it. 00:25:21.883 [2024-07-15 23:28:37.106998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.883 [2024-07-15 23:28:37.107036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.883 qpair failed and we were unable to recover it. 
00:25:21.883 [2024-07-15 23:28:37.107147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.883 [2024-07-15 23:28:37.107170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.883 qpair failed and we were unable to recover it. 00:25:21.883 [2024-07-15 23:28:37.107322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.883 [2024-07-15 23:28:37.107346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.883 qpair failed and we were unable to recover it. 00:25:21.883 [2024-07-15 23:28:37.107594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.883 [2024-07-15 23:28:37.107617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.883 qpair failed and we were unable to recover it. 00:25:21.883 [2024-07-15 23:28:37.107768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.883 [2024-07-15 23:28:37.107811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.883 qpair failed and we were unable to recover it. 00:25:21.883 [2024-07-15 23:28:37.107939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.883 [2024-07-15 23:28:37.107962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.883 qpair failed and we were unable to recover it. 00:25:21.883 [2024-07-15 23:28:37.108089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.883 [2024-07-15 23:28:37.108113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.883 qpair failed and we were unable to recover it. 00:25:21.883 [2024-07-15 23:28:37.108289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.883 [2024-07-15 23:28:37.108312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.883 qpair failed and we were unable to recover it. 00:25:21.883 [2024-07-15 23:28:37.108488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.883 [2024-07-15 23:28:37.108511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.883 qpair failed and we were unable to recover it. 00:25:21.883 [2024-07-15 23:28:37.108665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.883 [2024-07-15 23:28:37.108688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.883 qpair failed and we were unable to recover it. 00:25:21.884 [2024-07-15 23:28:37.108909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.884 [2024-07-15 23:28:37.108944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.884 qpair failed and we were unable to recover it. 
00:25:21.884 [2024-07-15 23:28:37.109089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.884 [2024-07-15 23:28:37.109112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.884 qpair failed and we were unable to recover it. 00:25:21.884 [2024-07-15 23:28:37.109225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.884 [2024-07-15 23:28:37.109262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.884 qpair failed and we were unable to recover it. 00:25:21.884 [2024-07-15 23:28:37.109416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.884 [2024-07-15 23:28:37.109438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.884 qpair failed and we were unable to recover it. 00:25:21.884 [2024-07-15 23:28:37.109592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.884 [2024-07-15 23:28:37.109616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.884 qpair failed and we were unable to recover it. 00:25:21.884 [2024-07-15 23:28:37.109779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.884 [2024-07-15 23:28:37.109803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.884 qpair failed and we were unable to recover it. 00:25:21.884 [2024-07-15 23:28:37.109934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.884 [2024-07-15 23:28:37.109958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.884 qpair failed and we were unable to recover it. 00:25:21.884 [2024-07-15 23:28:37.110131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.884 [2024-07-15 23:28:37.110155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.884 qpair failed and we were unable to recover it. 00:25:21.884 [2024-07-15 23:28:37.110323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.884 [2024-07-15 23:28:37.110346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.884 qpair failed and we were unable to recover it. 00:25:21.884 [2024-07-15 23:28:37.110581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.884 [2024-07-15 23:28:37.110604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.884 qpair failed and we were unable to recover it. 00:25:21.884 [2024-07-15 23:28:37.110748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.884 [2024-07-15 23:28:37.110786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.884 qpair failed and we were unable to recover it. 
00:25:21.884 [2024-07-15 23:28:37.110890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.884 [2024-07-15 23:28:37.110914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.884 qpair failed and we were unable to recover it. 00:25:21.884 [2024-07-15 23:28:37.111150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.884 [2024-07-15 23:28:37.111173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.884 qpair failed and we were unable to recover it. 00:25:21.884 [2024-07-15 23:28:37.111339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.884 [2024-07-15 23:28:37.111368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.884 qpair failed and we were unable to recover it. 00:25:21.884 [2024-07-15 23:28:37.111539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.884 [2024-07-15 23:28:37.111562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.884 qpair failed and we were unable to recover it. 00:25:21.884 [2024-07-15 23:28:37.111722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.884 [2024-07-15 23:28:37.111777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.884 qpair failed and we were unable to recover it. 00:25:21.884 [2024-07-15 23:28:37.111966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.884 [2024-07-15 23:28:37.111989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.884 qpair failed and we were unable to recover it. 00:25:21.884 [2024-07-15 23:28:37.112118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.884 [2024-07-15 23:28:37.112140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.884 qpair failed and we were unable to recover it. 00:25:21.884 [2024-07-15 23:28:37.112288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.884 [2024-07-15 23:28:37.112311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.884 qpair failed and we were unable to recover it. 00:25:21.884 [2024-07-15 23:28:37.112450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.884 [2024-07-15 23:28:37.112474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.884 qpair failed and we were unable to recover it. 00:25:21.884 [2024-07-15 23:28:37.112633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.884 [2024-07-15 23:28:37.112656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.884 qpair failed and we were unable to recover it. 
00:25:21.884 [2024-07-15 23:28:37.112838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.884 [2024-07-15 23:28:37.112865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.884 qpair failed and we were unable to recover it. 00:25:21.884 [2024-07-15 23:28:37.113020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.884 [2024-07-15 23:28:37.113043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.884 qpair failed and we were unable to recover it. 00:25:21.884 [2024-07-15 23:28:37.113197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.884 [2024-07-15 23:28:37.113219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.884 qpair failed and we were unable to recover it. 00:25:21.884 [2024-07-15 23:28:37.113399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.884 [2024-07-15 23:28:37.113422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.884 qpair failed and we were unable to recover it. 00:25:21.884 [2024-07-15 23:28:37.113554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.884 [2024-07-15 23:28:37.113592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.884 qpair failed and we were unable to recover it. 00:25:21.884 [2024-07-15 23:28:37.113711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.884 [2024-07-15 23:28:37.113733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.884 qpair failed and we were unable to recover it. 00:25:21.885 [2024-07-15 23:28:37.113916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.885 [2024-07-15 23:28:37.113939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.885 qpair failed and we were unable to recover it. 00:25:21.885 [2024-07-15 23:28:37.114087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.885 [2024-07-15 23:28:37.114126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.885 qpair failed and we were unable to recover it. 00:25:21.885 [2024-07-15 23:28:37.114380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.885 [2024-07-15 23:28:37.114403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.885 qpair failed and we were unable to recover it. 00:25:21.885 [2024-07-15 23:28:37.114553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.885 [2024-07-15 23:28:37.114586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.885 qpair failed and we were unable to recover it. 
00:25:21.885 [2024-07-15 23:28:37.114699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.885 [2024-07-15 23:28:37.114721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.885 qpair failed and we were unable to recover it. 00:25:21.885 [2024-07-15 23:28:37.114887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.885 [2024-07-15 23:28:37.114910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.885 qpair failed and we were unable to recover it. 00:25:21.885 [2024-07-15 23:28:37.115088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.885 [2024-07-15 23:28:37.115111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.885 qpair failed and we were unable to recover it. 00:25:21.885 [2024-07-15 23:28:37.115343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.885 [2024-07-15 23:28:37.115365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.885 qpair failed and we were unable to recover it. 00:25:21.885 [2024-07-15 23:28:37.115512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.885 [2024-07-15 23:28:37.115535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.885 qpair failed and we were unable to recover it. 00:25:21.885 [2024-07-15 23:28:37.115681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.885 [2024-07-15 23:28:37.115718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.885 qpair failed and we were unable to recover it. 00:25:21.885 [2024-07-15 23:28:37.115902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.885 [2024-07-15 23:28:37.115927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.885 qpair failed and we were unable to recover it. 00:25:21.885 [2024-07-15 23:28:37.116098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.885 [2024-07-15 23:28:37.116120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.885 qpair failed and we were unable to recover it. 00:25:21.885 [2024-07-15 23:28:37.116283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.885 [2024-07-15 23:28:37.116306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.885 qpair failed and we were unable to recover it. 00:25:21.885 [2024-07-15 23:28:37.116560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.885 [2024-07-15 23:28:37.116583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.885 qpair failed and we were unable to recover it. 
00:25:21.885 [2024-07-15 23:28:37.116771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.885 [2024-07-15 23:28:37.116795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.885 qpair failed and we were unable to recover it. 00:25:21.885 [2024-07-15 23:28:37.116992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.885 [2024-07-15 23:28:37.117031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.885 qpair failed and we were unable to recover it. 00:25:21.885 [2024-07-15 23:28:37.117197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.885 [2024-07-15 23:28:37.117220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.885 qpair failed and we were unable to recover it. 00:25:21.885 [2024-07-15 23:28:37.117338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.885 [2024-07-15 23:28:37.117376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.885 qpair failed and we were unable to recover it. 00:25:21.885 [2024-07-15 23:28:37.117593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.885 [2024-07-15 23:28:37.117616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.885 qpair failed and we were unable to recover it. 00:25:21.885 [2024-07-15 23:28:37.117756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.885 [2024-07-15 23:28:37.117781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.885 qpair failed and we were unable to recover it. 00:25:21.885 [2024-07-15 23:28:37.117910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.885 [2024-07-15 23:28:37.117934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.885 qpair failed and we were unable to recover it. 00:25:21.885 [2024-07-15 23:28:37.118147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.885 [2024-07-15 23:28:37.118171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.885 qpair failed and we were unable to recover it. 00:25:21.885 [2024-07-15 23:28:37.118329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.885 [2024-07-15 23:28:37.118352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.885 qpair failed and we were unable to recover it. 00:25:21.885 [2024-07-15 23:28:37.118536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.885 [2024-07-15 23:28:37.118559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.885 qpair failed and we were unable to recover it. 
00:25:21.885 [2024-07-15 23:28:37.118741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.885 [2024-07-15 23:28:37.118780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.885 qpair failed and we were unable to recover it. 00:25:21.885 [2024-07-15 23:28:37.118917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.885 [2024-07-15 23:28:37.118939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.885 qpair failed and we were unable to recover it. 00:25:21.885 [2024-07-15 23:28:37.119101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.885 [2024-07-15 23:28:37.119128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.885 qpair failed and we were unable to recover it. 00:25:21.885 [2024-07-15 23:28:37.119292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.885 [2024-07-15 23:28:37.119315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.885 qpair failed and we were unable to recover it. 00:25:21.885 [2024-07-15 23:28:37.119547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.885 [2024-07-15 23:28:37.119580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.885 qpair failed and we were unable to recover it. 00:25:21.885 [2024-07-15 23:28:37.119751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.885 [2024-07-15 23:28:37.119774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.885 qpair failed and we were unable to recover it. 00:25:21.885 [2024-07-15 23:28:37.119925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.885 [2024-07-15 23:28:37.119948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.885 qpair failed and we were unable to recover it. 00:25:21.885 [2024-07-15 23:28:37.120205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.885 [2024-07-15 23:28:37.120228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.885 qpair failed and we were unable to recover it. 00:25:21.885 [2024-07-15 23:28:37.120415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.885 [2024-07-15 23:28:37.120438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.885 qpair failed and we were unable to recover it. 00:25:21.885 [2024-07-15 23:28:37.120594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.885 [2024-07-15 23:28:37.120616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.885 qpair failed and we were unable to recover it. 
00:25:21.885 [2024-07-15 23:28:37.120841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.885 [2024-07-15 23:28:37.120864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.885 qpair failed and we were unable to recover it. 00:25:21.886 [2024-07-15 23:28:37.120982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.886 [2024-07-15 23:28:37.121005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.886 qpair failed and we were unable to recover it. 00:25:21.886 [2024-07-15 23:28:37.121209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.886 [2024-07-15 23:28:37.121233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.886 qpair failed and we were unable to recover it. 00:25:21.886 [2024-07-15 23:28:37.121360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.886 [2024-07-15 23:28:37.121383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.886 qpair failed and we were unable to recover it. 00:25:21.886 [2024-07-15 23:28:37.121546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.886 [2024-07-15 23:28:37.121584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.886 qpair failed and we were unable to recover it. 00:25:21.886 [2024-07-15 23:28:37.121760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.886 [2024-07-15 23:28:37.121784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.886 qpair failed and we were unable to recover it. 00:25:21.886 [2024-07-15 23:28:37.121997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.886 [2024-07-15 23:28:37.122021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.886 qpair failed and we were unable to recover it. 00:25:21.886 [2024-07-15 23:28:37.122157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.886 [2024-07-15 23:28:37.122194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.886 qpair failed and we were unable to recover it. 00:25:21.886 [2024-07-15 23:28:37.122375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.886 [2024-07-15 23:28:37.122397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.886 qpair failed and we were unable to recover it. 00:25:21.886 [2024-07-15 23:28:37.122566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.886 [2024-07-15 23:28:37.122594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.886 qpair failed and we were unable to recover it. 
00:25:21.886 [2024-07-15 23:28:37.122768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.886 [2024-07-15 23:28:37.122793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.886 qpair failed and we were unable to recover it. 00:25:21.886 [2024-07-15 23:28:37.122954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.886 [2024-07-15 23:28:37.122978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.886 qpair failed and we were unable to recover it. 00:25:21.886 [2024-07-15 23:28:37.123217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.886 [2024-07-15 23:28:37.123240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.886 qpair failed and we were unable to recover it. 00:25:21.886 [2024-07-15 23:28:37.123380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.886 [2024-07-15 23:28:37.123402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.886 qpair failed and we were unable to recover it. 00:25:21.886 [2024-07-15 23:28:37.123628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.886 [2024-07-15 23:28:37.123650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.886 qpair failed and we were unable to recover it. 00:25:21.886 [2024-07-15 23:28:37.123806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.886 [2024-07-15 23:28:37.123830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.886 qpair failed and we were unable to recover it. 00:25:21.886 [2024-07-15 23:28:37.123999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.886 [2024-07-15 23:28:37.124023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.886 qpair failed and we were unable to recover it. 00:25:21.886 [2024-07-15 23:28:37.124210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.886 [2024-07-15 23:28:37.124232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.886 qpair failed and we were unable to recover it. 00:25:21.886 [2024-07-15 23:28:37.124384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.886 [2024-07-15 23:28:37.124407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.886 qpair failed and we were unable to recover it. 00:25:21.886 [2024-07-15 23:28:37.124589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.886 [2024-07-15 23:28:37.124612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.886 qpair failed and we were unable to recover it. 
00:25:21.886 [2024-07-15 23:28:37.124780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.886 [2024-07-15 23:28:37.124820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.886 qpair failed and we were unable to recover it. 00:25:21.886 [2024-07-15 23:28:37.125041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.886 [2024-07-15 23:28:37.125065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.886 qpair failed and we were unable to recover it. 00:25:21.886 [2024-07-15 23:28:37.125249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.886 [2024-07-15 23:28:37.125272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.886 qpair failed and we were unable to recover it. 00:25:21.886 [2024-07-15 23:28:37.125433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.886 [2024-07-15 23:28:37.125470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.886 qpair failed and we were unable to recover it. 00:25:21.886 [2024-07-15 23:28:37.125674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.886 [2024-07-15 23:28:37.125697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.886 qpair failed and we were unable to recover it. 00:25:21.886 [2024-07-15 23:28:37.125830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.886 [2024-07-15 23:28:37.125867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.886 qpair failed and we were unable to recover it. 00:25:21.886 [2024-07-15 23:28:37.126077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.886 [2024-07-15 23:28:37.126100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.886 qpair failed and we were unable to recover it. 00:25:21.886 [2024-07-15 23:28:37.126261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.886 [2024-07-15 23:28:37.126283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.886 qpair failed and we were unable to recover it. 00:25:21.886 [2024-07-15 23:28:37.126432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.886 [2024-07-15 23:28:37.126472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.886 qpair failed and we were unable to recover it. 00:25:21.886 [2024-07-15 23:28:37.126636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.886 [2024-07-15 23:28:37.126659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.886 qpair failed and we were unable to recover it. 
00:25:21.886 [2024-07-15 23:28:37.126781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.886 [2024-07-15 23:28:37.126805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.886 qpair failed and we were unable to recover it. 00:25:21.886 [2024-07-15 23:28:37.126947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.886 [2024-07-15 23:28:37.126970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.886 qpair failed and we were unable to recover it. 00:25:21.886 [2024-07-15 23:28:37.127141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.886 [2024-07-15 23:28:37.127165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.886 qpair failed and we were unable to recover it. 00:25:21.886 [2024-07-15 23:28:37.127314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.886 [2024-07-15 23:28:37.127336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.886 qpair failed and we were unable to recover it. 00:25:21.886 [2024-07-15 23:28:37.127469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.886 [2024-07-15 23:28:37.127492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.886 qpair failed and we were unable to recover it. 00:25:21.886 [2024-07-15 23:28:37.127691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.886 [2024-07-15 23:28:37.127715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.886 qpair failed and we were unable to recover it. 00:25:21.886 [2024-07-15 23:28:37.127875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.886 [2024-07-15 23:28:37.127900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.886 qpair failed and we were unable to recover it. 00:25:21.886 [2024-07-15 23:28:37.128069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.886 [2024-07-15 23:28:37.128092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.886 qpair failed and we were unable to recover it. 00:25:21.887 [2024-07-15 23:28:37.128259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.887 [2024-07-15 23:28:37.128282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.887 qpair failed and we were unable to recover it. 00:25:21.887 [2024-07-15 23:28:37.128517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.887 [2024-07-15 23:28:37.128541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.887 qpair failed and we were unable to recover it. 
00:25:21.887 [2024-07-15 23:28:37.128700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.887 [2024-07-15 23:28:37.128723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.887 qpair failed and we were unable to recover it. 00:25:21.887 [2024-07-15 23:28:37.128857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.887 [2024-07-15 23:28:37.128896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.887 qpair failed and we were unable to recover it. 00:25:21.887 [2024-07-15 23:28:37.129054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.887 [2024-07-15 23:28:37.129101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.887 qpair failed and we were unable to recover it. 00:25:21.887 [2024-07-15 23:28:37.129224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.887 [2024-07-15 23:28:37.129248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.887 qpair failed and we were unable to recover it. 00:25:21.887 [2024-07-15 23:28:37.129384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.887 [2024-07-15 23:28:37.129408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.887 qpair failed and we were unable to recover it. 00:25:21.887 [2024-07-15 23:28:37.129568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.887 [2024-07-15 23:28:37.129591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.887 qpair failed and we were unable to recover it. 00:25:21.887 [2024-07-15 23:28:37.129829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.887 [2024-07-15 23:28:37.129853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.887 qpair failed and we were unable to recover it. 00:25:21.887 [2024-07-15 23:28:37.129989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.887 [2024-07-15 23:28:37.130013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.887 qpair failed and we were unable to recover it. 00:25:21.887 [2024-07-15 23:28:37.130182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.887 [2024-07-15 23:28:37.130205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.887 qpair failed and we were unable to recover it. 00:25:21.887 [2024-07-15 23:28:37.130333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.887 [2024-07-15 23:28:37.130370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.887 qpair failed and we were unable to recover it. 
00:25:21.887 [2024-07-15 23:28:37.130587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.887 [2024-07-15 23:28:37.130616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.887 qpair failed and we were unable to recover it. 00:25:21.887 [2024-07-15 23:28:37.130818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.887 [2024-07-15 23:28:37.130844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.887 qpair failed and we were unable to recover it. 00:25:21.887 [2024-07-15 23:28:37.130947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.887 [2024-07-15 23:28:37.130970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.887 qpair failed and we were unable to recover it. 00:25:21.887 [2024-07-15 23:28:37.131117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.887 [2024-07-15 23:28:37.131140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.887 qpair failed and we were unable to recover it. 00:25:21.887 [2024-07-15 23:28:37.131284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.887 [2024-07-15 23:28:37.131306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.887 qpair failed and we were unable to recover it. 00:25:21.887 [2024-07-15 23:28:37.131502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.887 [2024-07-15 23:28:37.131544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.887 qpair failed and we were unable to recover it. 00:25:21.887 [2024-07-15 23:28:37.131706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.887 [2024-07-15 23:28:37.131767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.887 qpair failed and we were unable to recover it. 00:25:21.887 [2024-07-15 23:28:37.131912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.887 [2024-07-15 23:28:37.131935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.887 qpair failed and we were unable to recover it. 00:25:21.887 [2024-07-15 23:28:37.132103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.887 [2024-07-15 23:28:37.132140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.887 qpair failed and we were unable to recover it. 00:25:21.887 [2024-07-15 23:28:37.132303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.887 [2024-07-15 23:28:37.132326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.887 qpair failed and we were unable to recover it. 
00:25:21.887 [2024-07-15 23:28:37.132446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.887 [2024-07-15 23:28:37.132469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.887 qpair failed and we were unable to recover it. 00:25:21.887 [2024-07-15 23:28:37.132598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.887 [2024-07-15 23:28:37.132620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.887 qpair failed and we were unable to recover it. 00:25:21.887 [2024-07-15 23:28:37.132801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.887 [2024-07-15 23:28:37.132825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.887 qpair failed and we were unable to recover it. 00:25:21.887 [2024-07-15 23:28:37.132964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.887 [2024-07-15 23:28:37.132987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.887 qpair failed and we were unable to recover it. 00:25:21.887 [2024-07-15 23:28:37.133140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.887 [2024-07-15 23:28:37.133180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.887 qpair failed and we were unable to recover it. 00:25:21.887 [2024-07-15 23:28:37.133331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.887 [2024-07-15 23:28:37.133354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.887 qpair failed and we were unable to recover it. 00:25:21.887 [2024-07-15 23:28:37.133548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.887 [2024-07-15 23:28:37.133572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.887 qpair failed and we were unable to recover it. 00:25:21.887 [2024-07-15 23:28:37.133770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.887 [2024-07-15 23:28:37.133794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.887 qpair failed and we were unable to recover it. 00:25:21.887 [2024-07-15 23:28:37.133944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.887 [2024-07-15 23:28:37.133967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.887 qpair failed and we were unable to recover it. 00:25:21.887 [2024-07-15 23:28:37.134112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.887 [2024-07-15 23:28:37.134149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.887 qpair failed and we were unable to recover it. 
00:25:21.887 [2024-07-15 23:28:37.134334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.887 [2024-07-15 23:28:37.134357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.887 qpair failed and we were unable to recover it. 00:25:21.887 [2024-07-15 23:28:37.134516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.887 [2024-07-15 23:28:37.134539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.887 qpair failed and we were unable to recover it. 00:25:21.887 [2024-07-15 23:28:37.134707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.887 [2024-07-15 23:28:37.134730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.887 qpair failed and we were unable to recover it. 00:25:21.887 [2024-07-15 23:28:37.134900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.887 [2024-07-15 23:28:37.134923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.888 qpair failed and we were unable to recover it. 00:25:21.888 [2024-07-15 23:28:37.135052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.888 [2024-07-15 23:28:37.135089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.888 qpair failed and we were unable to recover it. 00:25:21.888 [2024-07-15 23:28:37.135276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.888 [2024-07-15 23:28:37.135299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.888 qpair failed and we were unable to recover it. 00:25:21.888 [2024-07-15 23:28:37.135463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.888 [2024-07-15 23:28:37.135485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.888 qpair failed and we were unable to recover it. 00:25:21.888 [2024-07-15 23:28:37.135686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.888 [2024-07-15 23:28:37.135709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.888 qpair failed and we were unable to recover it. 00:25:21.888 [2024-07-15 23:28:37.135850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.888 [2024-07-15 23:28:37.135874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.888 qpair failed and we were unable to recover it. 00:25:21.888 [2024-07-15 23:28:37.135973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.888 [2024-07-15 23:28:37.135996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.888 qpair failed and we were unable to recover it. 
00:25:21.888 [2024-07-15 23:28:37.136225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.888 [2024-07-15 23:28:37.136247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.888 qpair failed and we were unable to recover it. 00:25:21.888 [2024-07-15 23:28:37.136417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.888 [2024-07-15 23:28:37.136440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.888 qpair failed and we were unable to recover it. 00:25:21.888 [2024-07-15 23:28:37.136608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.888 [2024-07-15 23:28:37.136635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.888 qpair failed and we were unable to recover it. 00:25:21.888 [2024-07-15 23:28:37.136862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.888 [2024-07-15 23:28:37.136886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.888 qpair failed and we were unable to recover it. 00:25:21.888 [2024-07-15 23:28:37.137024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.888 [2024-07-15 23:28:37.137047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.888 qpair failed and we were unable to recover it. 00:25:21.888 [2024-07-15 23:28:37.137240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.888 [2024-07-15 23:28:37.137263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.888 qpair failed and we were unable to recover it. 00:25:21.888 [2024-07-15 23:28:37.137448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.888 [2024-07-15 23:28:37.137471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.888 qpair failed and we were unable to recover it. 00:25:21.888 [2024-07-15 23:28:37.137697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.888 [2024-07-15 23:28:37.137748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.888 qpair failed and we were unable to recover it. 00:25:21.888 [2024-07-15 23:28:37.137916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.888 [2024-07-15 23:28:37.137939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.888 qpair failed and we were unable to recover it. 00:25:21.888 [2024-07-15 23:28:37.138155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.888 [2024-07-15 23:28:37.138178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.888 qpair failed and we were unable to recover it. 
00:25:21.888 [2024-07-15 23:28:37.138304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.888 [2024-07-15 23:28:37.138326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.888 qpair failed and we were unable to recover it. 00:25:21.888 [2024-07-15 23:28:37.138534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.888 [2024-07-15 23:28:37.138557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.888 qpair failed and we were unable to recover it. 00:25:21.888 [2024-07-15 23:28:37.138686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.888 [2024-07-15 23:28:37.138709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.888 qpair failed and we were unable to recover it. 00:25:21.888 [2024-07-15 23:28:37.138866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.888 [2024-07-15 23:28:37.138906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.888 qpair failed and we were unable to recover it. 00:25:21.888 [2024-07-15 23:28:37.139036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.888 [2024-07-15 23:28:37.139060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.888 qpair failed and we were unable to recover it. 00:25:21.888 [2024-07-15 23:28:37.139243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.888 [2024-07-15 23:28:37.139266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.888 qpair failed and we were unable to recover it. 00:25:21.888 [2024-07-15 23:28:37.139467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.888 [2024-07-15 23:28:37.139490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.888 qpair failed and we were unable to recover it. 00:25:21.888 [2024-07-15 23:28:37.139662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.888 [2024-07-15 23:28:37.139684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.888 qpair failed and we were unable to recover it. 00:25:21.888 [2024-07-15 23:28:37.139854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.888 [2024-07-15 23:28:37.139879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.888 qpair failed and we were unable to recover it. 00:25:21.888 [2024-07-15 23:28:37.140071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.888 [2024-07-15 23:28:37.140095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.888 qpair failed and we were unable to recover it. 
00:25:21.888 [2024-07-15 23:28:37.140252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.888 [2024-07-15 23:28:37.140275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.888 qpair failed and we were unable to recover it. 00:25:21.888 [2024-07-15 23:28:37.140444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.888 [2024-07-15 23:28:37.140466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.888 qpair failed and we were unable to recover it. 00:25:21.888 [2024-07-15 23:28:37.140609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.888 [2024-07-15 23:28:37.140632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.888 qpair failed and we were unable to recover it. 00:25:21.888 [2024-07-15 23:28:37.140890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.888 [2024-07-15 23:28:37.140914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.888 qpair failed and we were unable to recover it. 00:25:21.888 [2024-07-15 23:28:37.141109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.888 [2024-07-15 23:28:37.141132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.888 qpair failed and we were unable to recover it. 00:25:21.888 [2024-07-15 23:28:37.141269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.888 [2024-07-15 23:28:37.141291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.888 qpair failed and we were unable to recover it. 00:25:21.888 [2024-07-15 23:28:37.141446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.888 [2024-07-15 23:28:37.141469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.888 qpair failed and we were unable to recover it. 00:25:21.888 [2024-07-15 23:28:37.141630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.888 [2024-07-15 23:28:37.141653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.888 qpair failed and we were unable to recover it. 00:25:21.888 [2024-07-15 23:28:37.141846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.888 [2024-07-15 23:28:37.141871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.888 qpair failed and we were unable to recover it. 00:25:21.888 [2024-07-15 23:28:37.142025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.888 [2024-07-15 23:28:37.142049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.888 qpair failed and we were unable to recover it. 
00:25:21.888 [2024-07-15 23:28:37.142229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.888 [2024-07-15 23:28:37.142253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.888 qpair failed and we were unable to recover it. 00:25:21.888 [2024-07-15 23:28:37.142415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.889 [2024-07-15 23:28:37.142440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.889 qpair failed and we were unable to recover it. 00:25:21.889 [2024-07-15 23:28:37.142575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.889 [2024-07-15 23:28:37.142599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.889 qpair failed and we were unable to recover it. 00:25:21.889 [2024-07-15 23:28:37.142741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.889 [2024-07-15 23:28:37.142766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.889 qpair failed and we were unable to recover it. 00:25:21.889 [2024-07-15 23:28:37.142885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.889 [2024-07-15 23:28:37.142910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.889 qpair failed and we were unable to recover it. 00:25:21.889 [2024-07-15 23:28:37.143066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.889 [2024-07-15 23:28:37.143103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.889 qpair failed and we were unable to recover it. 00:25:21.889 [2024-07-15 23:28:37.143297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.889 [2024-07-15 23:28:37.143329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.889 qpair failed and we were unable to recover it. 00:25:21.889 [2024-07-15 23:28:37.143532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.889 [2024-07-15 23:28:37.143556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.889 qpair failed and we were unable to recover it. 00:25:21.889 [2024-07-15 23:28:37.143731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.889 [2024-07-15 23:28:37.143762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.889 qpair failed and we were unable to recover it. 00:25:21.889 [2024-07-15 23:28:37.143935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.889 [2024-07-15 23:28:37.143959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.889 qpair failed and we were unable to recover it. 
00:25:21.889 [2024-07-15 23:28:37.144097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.889 [2024-07-15 23:28:37.144122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.889 qpair failed and we were unable to recover it. 00:25:21.889 [2024-07-15 23:28:37.144266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.889 [2024-07-15 23:28:37.144289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.889 qpair failed and we were unable to recover it. 00:25:21.889 [2024-07-15 23:28:37.144494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.889 [2024-07-15 23:28:37.144516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.889 qpair failed and we were unable to recover it. 00:25:21.889 [2024-07-15 23:28:37.144672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.889 [2024-07-15 23:28:37.144709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.889 qpair failed and we were unable to recover it. 00:25:21.889 [2024-07-15 23:28:37.144896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.889 [2024-07-15 23:28:37.144921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.889 qpair failed and we were unable to recover it. 00:25:21.889 [2024-07-15 23:28:37.145053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.889 [2024-07-15 23:28:37.145092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.889 qpair failed and we were unable to recover it. 00:25:21.889 [2024-07-15 23:28:37.145318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.889 [2024-07-15 23:28:37.145349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:21.889 qpair failed and we were unable to recover it. 00:25:22.160 [2024-07-15 23:28:37.145491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.160 [2024-07-15 23:28:37.145515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.160 qpair failed and we were unable to recover it. 00:25:22.160 [2024-07-15 23:28:37.145645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.160 [2024-07-15 23:28:37.145671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.160 qpair failed and we were unable to recover it. 00:25:22.160 [2024-07-15 23:28:37.145838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.160 [2024-07-15 23:28:37.145863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.160 qpair failed and we were unable to recover it. 
00:25:22.160 [2024-07-15 23:28:37.146004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.160 [2024-07-15 23:28:37.146043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.160 qpair failed and we were unable to recover it. 00:25:22.160 [2024-07-15 23:28:37.146289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.160 [2024-07-15 23:28:37.146314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.160 qpair failed and we were unable to recover it. 00:25:22.160 [2024-07-15 23:28:37.146452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.160 [2024-07-15 23:28:37.146476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.160 qpair failed and we were unable to recover it. 00:25:22.160 [2024-07-15 23:28:37.146659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.160 [2024-07-15 23:28:37.146683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.160 qpair failed and we were unable to recover it. 00:25:22.160 [2024-07-15 23:28:37.146848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.160 [2024-07-15 23:28:37.146872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.160 qpair failed and we were unable to recover it. 00:25:22.160 [2024-07-15 23:28:37.147010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.160 [2024-07-15 23:28:37.147035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.160 qpair failed and we were unable to recover it. 00:25:22.160 [2024-07-15 23:28:37.147196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.160 [2024-07-15 23:28:37.147221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.160 qpair failed and we were unable to recover it. 00:25:22.160 [2024-07-15 23:28:37.147376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.160 [2024-07-15 23:28:37.147399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.160 qpair failed and we were unable to recover it. 00:25:22.160 [2024-07-15 23:28:37.147527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.160 [2024-07-15 23:28:37.147552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.160 qpair failed and we were unable to recover it. 00:25:22.160 [2024-07-15 23:28:37.147720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.160 [2024-07-15 23:28:37.147758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.160 qpair failed and we were unable to recover it. 
00:25:22.160 [2024-07-15 23:28:37.147888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.160 [2024-07-15 23:28:37.147912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.160 qpair failed and we were unable to recover it. 00:25:22.160 [2024-07-15 23:28:37.148061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.160 [2024-07-15 23:28:37.148086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.160 qpair failed and we were unable to recover it. 00:25:22.160 [2024-07-15 23:28:37.148297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.160 [2024-07-15 23:28:37.148322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.160 qpair failed and we were unable to recover it. 00:25:22.160 [2024-07-15 23:28:37.148518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.160 [2024-07-15 23:28:37.148543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.160 qpair failed and we were unable to recover it. 00:25:22.160 [2024-07-15 23:28:37.148668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.160 [2024-07-15 23:28:37.148691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.160 qpair failed and we were unable to recover it. 00:25:22.160 [2024-07-15 23:28:37.148906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.160 [2024-07-15 23:28:37.148941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.160 qpair failed and we were unable to recover it. 00:25:22.160 [2024-07-15 23:28:37.149118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.160 [2024-07-15 23:28:37.149156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.160 qpair failed and we were unable to recover it. 00:25:22.160 [2024-07-15 23:28:37.149347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.160 [2024-07-15 23:28:37.149370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.160 qpair failed and we were unable to recover it. 00:25:22.160 [2024-07-15 23:28:37.149553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.160 [2024-07-15 23:28:37.149577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.160 qpair failed and we were unable to recover it. 00:25:22.160 [2024-07-15 23:28:37.149784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.160 [2024-07-15 23:28:37.149823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.160 qpair failed and we were unable to recover it. 
00:25:22.160 [2024-07-15 23:28:37.149944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.160 [2024-07-15 23:28:37.149972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.160 qpair failed and we were unable to recover it. 00:25:22.160 [2024-07-15 23:28:37.150102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.160 [2024-07-15 23:28:37.150140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.160 qpair failed and we were unable to recover it. 00:25:22.160 [2024-07-15 23:28:37.150284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.160 [2024-07-15 23:28:37.150309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.160 qpair failed and we were unable to recover it. 00:25:22.160 [2024-07-15 23:28:37.150442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.160 [2024-07-15 23:28:37.150481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.160 qpair failed and we were unable to recover it. 00:25:22.160 [2024-07-15 23:28:37.150644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.160 [2024-07-15 23:28:37.150667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.160 qpair failed and we were unable to recover it. 00:25:22.160 [2024-07-15 23:28:37.150861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.160 [2024-07-15 23:28:37.150886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.161 qpair failed and we were unable to recover it. 00:25:22.161 [2024-07-15 23:28:37.151021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.161 [2024-07-15 23:28:37.151045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.161 qpair failed and we were unable to recover it. 00:25:22.161 [2024-07-15 23:28:37.151182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.161 [2024-07-15 23:28:37.151204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.161 qpair failed and we were unable to recover it. 00:25:22.161 [2024-07-15 23:28:37.151367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.161 [2024-07-15 23:28:37.151390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.161 qpair failed and we were unable to recover it. 00:25:22.161 [2024-07-15 23:28:37.151570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.161 [2024-07-15 23:28:37.151593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.161 qpair failed and we were unable to recover it. 
00:25:22.161 [2024-07-15 23:28:37.151714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.161 [2024-07-15 23:28:37.151770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.161 qpair failed and we were unable to recover it. 00:25:22.161 [2024-07-15 23:28:37.151950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.161 [2024-07-15 23:28:37.151972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.161 qpair failed and we were unable to recover it. 00:25:22.161 [2024-07-15 23:28:37.152146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.161 [2024-07-15 23:28:37.152169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.161 qpair failed and we were unable to recover it. 00:25:22.161 [2024-07-15 23:28:37.152337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.161 [2024-07-15 23:28:37.152360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.161 qpair failed and we were unable to recover it. 00:25:22.161 [2024-07-15 23:28:37.152591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.161 [2024-07-15 23:28:37.152615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.161 qpair failed and we were unable to recover it. 00:25:22.161 [2024-07-15 23:28:37.152755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.161 [2024-07-15 23:28:37.152794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.161 qpair failed and we were unable to recover it. 00:25:22.161 [2024-07-15 23:28:37.152925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.161 [2024-07-15 23:28:37.152948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.161 qpair failed and we were unable to recover it. 00:25:22.161 [2024-07-15 23:28:37.153183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.161 [2024-07-15 23:28:37.153206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.161 qpair failed and we were unable to recover it. 00:25:22.161 [2024-07-15 23:28:37.153368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.161 [2024-07-15 23:28:37.153391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.161 qpair failed and we were unable to recover it. 00:25:22.161 [2024-07-15 23:28:37.153585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.161 [2024-07-15 23:28:37.153623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.161 qpair failed and we were unable to recover it. 
00:25:22.161 [2024-07-15 23:28:37.153766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.161 [2024-07-15 23:28:37.153791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.161 qpair failed and we were unable to recover it. 00:25:22.161 [2024-07-15 23:28:37.153931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.161 [2024-07-15 23:28:37.153954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.161 qpair failed and we were unable to recover it. 00:25:22.161 [2024-07-15 23:28:37.154135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.161 [2024-07-15 23:28:37.154165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.161 qpair failed and we were unable to recover it. 00:25:22.161 [2024-07-15 23:28:37.154297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.161 [2024-07-15 23:28:37.154319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.161 qpair failed and we were unable to recover it. 00:25:22.161 [2024-07-15 23:28:37.154478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.161 [2024-07-15 23:28:37.154500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.161 qpair failed and we were unable to recover it. 00:25:22.161 [2024-07-15 23:28:37.154702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.161 [2024-07-15 23:28:37.154725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.161 qpair failed and we were unable to recover it. 00:25:22.161 [2024-07-15 23:28:37.154882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.161 [2024-07-15 23:28:37.154906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.161 qpair failed and we were unable to recover it. 00:25:22.161 [2024-07-15 23:28:37.155023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.161 [2024-07-15 23:28:37.155050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.161 qpair failed and we were unable to recover it. 00:25:22.161 [2024-07-15 23:28:37.155207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.161 [2024-07-15 23:28:37.155230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.161 qpair failed and we were unable to recover it. 00:25:22.161 [2024-07-15 23:28:37.155386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.161 [2024-07-15 23:28:37.155424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.161 qpair failed and we were unable to recover it. 
[... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every connection attempt timestamped from 2024-07-15 23:28:37.155545 through 23:28:37.193191 ...]
00:25:22.163 [2024-07-15 23:28:37.193355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.163 [2024-07-15 23:28:37.193388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.163 qpair failed and we were unable to recover it. 00:25:22.163 [2024-07-15 23:28:37.193551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.163 [2024-07-15 23:28:37.193575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.163 qpair failed and we were unable to recover it. 00:25:22.163 [2024-07-15 23:28:37.193715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.163 [2024-07-15 23:28:37.193745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.163 qpair failed and we were unable to recover it. 00:25:22.163 [2024-07-15 23:28:37.193891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.163 [2024-07-15 23:28:37.193929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.163 qpair failed and we were unable to recover it. 00:25:22.163 [2024-07-15 23:28:37.194085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.163 [2024-07-15 23:28:37.194109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.163 qpair failed and we were unable to recover it. 00:25:22.163 [2024-07-15 23:28:37.194300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.163 [2024-07-15 23:28:37.194323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.163 qpair failed and we were unable to recover it. 00:25:22.163 [2024-07-15 23:28:37.194513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.163 [2024-07-15 23:28:37.194535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.163 qpair failed and we were unable to recover it. 00:25:22.163 [2024-07-15 23:28:37.194687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.163 [2024-07-15 23:28:37.194709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.163 qpair failed and we were unable to recover it. 00:25:22.163 [2024-07-15 23:28:37.194895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.163 [2024-07-15 23:28:37.194919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.163 qpair failed and we were unable to recover it. 00:25:22.163 [2024-07-15 23:28:37.195032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.163 [2024-07-15 23:28:37.195070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.163 qpair failed and we were unable to recover it. 
00:25:22.163 [2024-07-15 23:28:37.195193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.163 [2024-07-15 23:28:37.195216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.163 qpair failed and we were unable to recover it. 00:25:22.163 [2024-07-15 23:28:37.195444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.163 [2024-07-15 23:28:37.195467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.163 qpair failed and we were unable to recover it. 00:25:22.163 [2024-07-15 23:28:37.195602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.163 [2024-07-15 23:28:37.195625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.163 qpair failed and we were unable to recover it. 00:25:22.163 [2024-07-15 23:28:37.195781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.195805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.195949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.195973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.196162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.196185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.196316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.196339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.196533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.196555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.196703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.196726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.196915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.196940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 
00:25:22.164 [2024-07-15 23:28:37.197078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.197102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.197263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.197287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.197509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.197539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.197675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.197699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.197899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.197928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.198118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.198140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.198294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.198316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.198497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.198520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.198643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.198665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.198816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.198839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 
00:25:22.164 [2024-07-15 23:28:37.198986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.199010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.199204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.199240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.199382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.199404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.199619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.199643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.199793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.199832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.200015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.200039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.200199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.200221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.200391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.200413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.200608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.200632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.200750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.200775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 
00:25:22.164 [2024-07-15 23:28:37.200906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.200929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.201063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.201088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.201305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.201328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.201491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.201513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.201657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.201680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.201836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.201876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.202092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.202116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.202262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.202284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.202502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.202524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.202685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.202707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 
00:25:22.164 [2024-07-15 23:28:37.202946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.202971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.203112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.203135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.203410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.203434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.203594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.203618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.203825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.203850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.203981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.204005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.204188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.204210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.204352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.204376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.204522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.204559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.204766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.204792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 
00:25:22.164 [2024-07-15 23:28:37.204950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.204973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.205096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.205119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.205303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.205326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.205473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.205495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.205676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.205699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.205887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.205913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.206024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.206049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.206177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.206202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.206343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.206367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.206556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.206581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 
00:25:22.164 [2024-07-15 23:28:37.206723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.206755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.206913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.206942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.207120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.207146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.207312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.207342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.207514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.207539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.207689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.207715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.207860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.207887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.208020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.208045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.208219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.208245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.208465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.208492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 
00:25:22.164 [2024-07-15 23:28:37.208711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.208758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.208956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.208981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.209151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.209178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.209318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.209345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.209478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.209504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.209642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.209668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.209811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.209837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.209967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.210001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.210116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.210141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.210268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.210293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 
00:25:22.164 [2024-07-15 23:28:37.210445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.164 [2024-07-15 23:28:37.210471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.164 qpair failed and we were unable to recover it. 00:25:22.164 [2024-07-15 23:28:37.210632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.210658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 00:25:22.165 [2024-07-15 23:28:37.210862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.210900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 00:25:22.165 [2024-07-15 23:28:37.211054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.211087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 00:25:22.165 [2024-07-15 23:28:37.211233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.211259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 00:25:22.165 [2024-07-15 23:28:37.211414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.211440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 00:25:22.165 [2024-07-15 23:28:37.211649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.211680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 00:25:22.165 [2024-07-15 23:28:37.211818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.211844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 00:25:22.165 [2024-07-15 23:28:37.212016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.212042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 00:25:22.165 [2024-07-15 23:28:37.212172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.212196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 
00:25:22.165 [2024-07-15 23:28:37.212465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.212492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 00:25:22.165 [2024-07-15 23:28:37.212635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.212660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 00:25:22.165 [2024-07-15 23:28:37.212828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.212855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 00:25:22.165 [2024-07-15 23:28:37.212980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.213007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 00:25:22.165 [2024-07-15 23:28:37.213169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.213193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 00:25:22.165 [2024-07-15 23:28:37.213309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.213335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 00:25:22.165 [2024-07-15 23:28:37.213505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.213535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 00:25:22.165 [2024-07-15 23:28:37.213734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.213773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 00:25:22.165 [2024-07-15 23:28:37.213925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.213951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 00:25:22.165 [2024-07-15 23:28:37.214069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.214098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 
00:25:22.165 [2024-07-15 23:28:37.214263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.214287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 00:25:22.165 [2024-07-15 23:28:37.214517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.214541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 00:25:22.165 [2024-07-15 23:28:37.214697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.214721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 00:25:22.165 [2024-07-15 23:28:37.214882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.214908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 00:25:22.165 [2024-07-15 23:28:37.215137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.215165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 00:25:22.165 [2024-07-15 23:28:37.215317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.215353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 00:25:22.165 [2024-07-15 23:28:37.215511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.215535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 00:25:22.165 [2024-07-15 23:28:37.215721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.215758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 00:25:22.165 [2024-07-15 23:28:37.215876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.215902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 00:25:22.165 [2024-07-15 23:28:37.216092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.216122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 
00:25:22.165 [2024-07-15 23:28:37.216296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:22.165 [2024-07-15 23:28:37.216321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420
00:25:22.165 qpair failed and we were unable to recover it.
00:25:22.165 [2024-07-15 23:28:37.216523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:22.165 [2024-07-15 23:28:37.216559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420
00:25:22.165 qpair failed and we were unable to recover it.
00:25:22.165 [2024-07-15 23:28:37.216713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:22.165 [2024-07-15 23:28:37.216748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420
00:25:22.165 qpair failed and we were unable to recover it.
00:25:22.165 [2024-07-15 23:28:37.216931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:22.165 [2024-07-15 23:28:37.216961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420
00:25:22.165 qpair failed and we were unable to recover it.
00:25:22.165 [2024-07-15 23:28:37.217141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:22.165 [2024-07-15 23:28:37.217143] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:22.165 [2024-07-15 23:28:37.217167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420
00:25:22.165 [2024-07-15 23:28:37.217177] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:22.165 qpair failed and we were unable to recover it.
00:25:22.165 [2024-07-15 23:28:37.217193] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:22.165 [2024-07-15 23:28:37.217218] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:22.165 [2024-07-15 23:28:37.217228] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:22.165 [2024-07-15 23:28:37.217377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:22.165 [2024-07-15 23:28:37.217405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420
00:25:22.165 [2024-07-15 23:28:37.217344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:25:22.165 qpair failed and we were unable to recover it.
00:25:22.165 [2024-07-15 23:28:37.217397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:25:22.165 [2024-07-15 23:28:37.217528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:22.165 [2024-07-15 23:28:37.217446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:25:22.165 [2024-07-15 23:28:37.217553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420
00:25:22.165 [2024-07-15 23:28:37.217450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:25:22.165 qpair failed and we were unable to recover it.
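(The app_setup_trace notices above show that the nvmf target enabled all tracepoint groups (mask 0xFFFF) and name two ways to collect the trace. A minimal sketch using exactly the command and shared-memory file mentioned in the log; the /tmp destination is an arbitrary choice for illustration.)

    # Snapshot the live trace of the running nvmf app (shm instance 0), as the notice suggests.
    spdk_trace -s nvmf -i 0

    # Or keep the trace shared-memory file for offline analysis/debug.
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0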
00:25:22.165 [2024-07-15 23:28:37.217714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.217748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 00:25:22.165 [2024-07-15 23:28:37.217892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.217919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 00:25:22.165 [2024-07-15 23:28:37.218048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.218072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 00:25:22.165 [2024-07-15 23:28:37.218217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.218244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 00:25:22.165 [2024-07-15 23:28:37.218451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.218477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 00:25:22.165 [2024-07-15 23:28:37.218626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.218651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 00:25:22.165 [2024-07-15 23:28:37.218819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.218845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 00:25:22.165 [2024-07-15 23:28:37.218958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.218983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 00:25:22.165 [2024-07-15 23:28:37.219215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.219242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 00:25:22.165 [2024-07-15 23:28:37.219355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.219381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 
00:25:22.165 [2024-07-15 23:28:37.219529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.219554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 00:25:22.165 [2024-07-15 23:28:37.219663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.219689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 00:25:22.165 [2024-07-15 23:28:37.219821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.219846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 00:25:22.165 [2024-07-15 23:28:37.219967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.219993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 00:25:22.165 [2024-07-15 23:28:37.220153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.220183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 00:25:22.165 [2024-07-15 23:28:37.220301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.220326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 00:25:22.165 [2024-07-15 23:28:37.220472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.220508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 00:25:22.165 [2024-07-15 23:28:37.220695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.220725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 00:25:22.165 [2024-07-15 23:28:37.220870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.220895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 00:25:22.165 [2024-07-15 23:28:37.221051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.221085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 
00:25:22.165 [2024-07-15 23:28:37.221259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.165 [2024-07-15 23:28:37.221285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.165 qpair failed and we were unable to recover it. 00:25:22.166 [2024-07-15 23:28:37.221414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.221440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 00:25:22.166 [2024-07-15 23:28:37.221576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.221600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 00:25:22.166 [2024-07-15 23:28:37.221753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.221779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 00:25:22.166 [2024-07-15 23:28:37.221917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.221942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 00:25:22.166 [2024-07-15 23:28:37.222090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.222116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 00:25:22.166 [2024-07-15 23:28:37.222257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.222284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 00:25:22.166 [2024-07-15 23:28:37.222424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.222449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 00:25:22.166 [2024-07-15 23:28:37.222624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.222649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 00:25:22.166 [2024-07-15 23:28:37.222816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.222844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 
00:25:22.166 [2024-07-15 23:28:37.222961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.222986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 00:25:22.166 [2024-07-15 23:28:37.223137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.223162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 00:25:22.166 [2024-07-15 23:28:37.223278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.223311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 00:25:22.166 [2024-07-15 23:28:37.223466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.223490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 00:25:22.166 [2024-07-15 23:28:37.223636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.223662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 00:25:22.166 [2024-07-15 23:28:37.223804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.223830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 00:25:22.166 [2024-07-15 23:28:37.224041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.224067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 00:25:22.166 [2024-07-15 23:28:37.224179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.224205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 00:25:22.166 [2024-07-15 23:28:37.224362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.224387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 00:25:22.166 [2024-07-15 23:28:37.224499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.224524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 
00:25:22.166 [2024-07-15 23:28:37.224730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.224769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 00:25:22.166 [2024-07-15 23:28:37.224920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.224947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 00:25:22.166 [2024-07-15 23:28:37.225111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.225136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 00:25:22.166 [2024-07-15 23:28:37.225318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.225351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 00:25:22.166 [2024-07-15 23:28:37.225508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.225534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 00:25:22.166 [2024-07-15 23:28:37.225766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.225793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 00:25:22.166 [2024-07-15 23:28:37.225969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.225999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 00:25:22.166 [2024-07-15 23:28:37.226135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.226160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 00:25:22.166 [2024-07-15 23:28:37.226276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.226301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 00:25:22.166 [2024-07-15 23:28:37.226408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.226434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 
00:25:22.166 [2024-07-15 23:28:37.226534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.226564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 00:25:22.166 [2024-07-15 23:28:37.226717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.226754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 00:25:22.166 [2024-07-15 23:28:37.226886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.226923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 00:25:22.166 [2024-07-15 23:28:37.227071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.227098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 00:25:22.166 [2024-07-15 23:28:37.227244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.227269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 00:25:22.166 [2024-07-15 23:28:37.227480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.227506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 00:25:22.166 [2024-07-15 23:28:37.227644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.227669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 00:25:22.166 [2024-07-15 23:28:37.227792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.227817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 00:25:22.166 [2024-07-15 23:28:37.228036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.228062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 00:25:22.166 [2024-07-15 23:28:37.228232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.228268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 
00:25:22.166 [2024-07-15 23:28:37.228411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.228436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 00:25:22.166 [2024-07-15 23:28:37.228590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.228616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 00:25:22.166 [2024-07-15 23:28:37.228846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.228873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 00:25:22.166 [2024-07-15 23:28:37.229019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.229043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 00:25:22.166 [2024-07-15 23:28:37.229189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.229215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 00:25:22.166 [2024-07-15 23:28:37.229354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.229380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 00:25:22.166 [2024-07-15 23:28:37.229607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.229633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 00:25:22.166 [2024-07-15 23:28:37.229780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.229806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 00:25:22.166 [2024-07-15 23:28:37.229942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.229966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 00:25:22.166 [2024-07-15 23:28:37.230137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.230164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 
00:25:22.166 [2024-07-15 23:28:37.230302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.230332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 00:25:22.166 [2024-07-15 23:28:37.230496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.230524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 00:25:22.166 [2024-07-15 23:28:37.230666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.230692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 00:25:22.166 [2024-07-15 23:28:37.230849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.230875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 00:25:22.166 [2024-07-15 23:28:37.231001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.166 [2024-07-15 23:28:37.231025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.166 qpair failed and we were unable to recover it. 00:25:22.167 [2024-07-15 23:28:37.231247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.167 [2024-07-15 23:28:37.231274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.167 qpair failed and we were unable to recover it. 00:25:22.167 [2024-07-15 23:28:37.231417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.167 [2024-07-15 23:28:37.231443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.167 qpair failed and we were unable to recover it. 00:25:22.167 [2024-07-15 23:28:37.231578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.167 [2024-07-15 23:28:37.231603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.167 qpair failed and we were unable to recover it. 00:25:22.167 [2024-07-15 23:28:37.231720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.167 [2024-07-15 23:28:37.231756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.167 qpair failed and we were unable to recover it. 00:25:22.167 [2024-07-15 23:28:37.231899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.167 [2024-07-15 23:28:37.231924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.167 qpair failed and we were unable to recover it. 
00:25:22.167 [2024-07-15 23:28:37.232028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.167 [2024-07-15 23:28:37.232054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.167 qpair failed and we were unable to recover it. 00:25:22.167 [2024-07-15 23:28:37.232193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.167 [2024-07-15 23:28:37.232219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.167 qpair failed and we were unable to recover it. 00:25:22.167 [2024-07-15 23:28:37.232405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.167 [2024-07-15 23:28:37.232432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.167 qpair failed and we were unable to recover it. 00:25:22.167 [2024-07-15 23:28:37.232557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.167 [2024-07-15 23:28:37.232583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.167 qpair failed and we were unable to recover it. 00:25:22.167 [2024-07-15 23:28:37.232699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.167 [2024-07-15 23:28:37.232724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.167 qpair failed and we were unable to recover it. 00:25:22.167 [2024-07-15 23:28:37.232882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.167 [2024-07-15 23:28:37.232909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.167 qpair failed and we were unable to recover it. 00:25:22.167 [2024-07-15 23:28:37.233052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.167 [2024-07-15 23:28:37.233077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.167 qpair failed and we were unable to recover it. 00:25:22.167 [2024-07-15 23:28:37.233279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.167 [2024-07-15 23:28:37.233305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.167 qpair failed and we were unable to recover it. 00:25:22.167 [2024-07-15 23:28:37.233470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.167 [2024-07-15 23:28:37.233497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.167 qpair failed and we were unable to recover it. 00:25:22.167 [2024-07-15 23:28:37.233614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.167 [2024-07-15 23:28:37.233640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.167 qpair failed and we were unable to recover it. 
00:25:22.167 [2024-07-15 23:28:37.233783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.167 [2024-07-15 23:28:37.233810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.167 qpair failed and we were unable to recover it. 00:25:22.167 [2024-07-15 23:28:37.233974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.167 [2024-07-15 23:28:37.234002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.167 qpair failed and we were unable to recover it. 00:25:22.167 [2024-07-15 23:28:37.234140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.167 [2024-07-15 23:28:37.234166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.167 qpair failed and we were unable to recover it. 00:25:22.167 [2024-07-15 23:28:37.234389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.167 [2024-07-15 23:28:37.234416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.167 qpair failed and we were unable to recover it. 00:25:22.167 [2024-07-15 23:28:37.234578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.167 [2024-07-15 23:28:37.234608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.167 qpair failed and we were unable to recover it. 00:25:22.167 [2024-07-15 23:28:37.234761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.167 [2024-07-15 23:28:37.234787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.167 qpair failed and we were unable to recover it. 00:25:22.167 [2024-07-15 23:28:37.234900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.167 [2024-07-15 23:28:37.234927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.167 qpair failed and we were unable to recover it. 00:25:22.167 [2024-07-15 23:28:37.235070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.167 [2024-07-15 23:28:37.235095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.167 qpair failed and we were unable to recover it. 00:25:22.167 [2024-07-15 23:28:37.235233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.167 [2024-07-15 23:28:37.235260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.167 qpair failed and we were unable to recover it. 00:25:22.167 [2024-07-15 23:28:37.235374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.167 [2024-07-15 23:28:37.235400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.167 qpair failed and we were unable to recover it. 
00:25:22.167 [2024-07-15 23:28:37.235550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.167 [2024-07-15 23:28:37.235576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.167 qpair failed and we were unable to recover it. 00:25:22.167 [2024-07-15 23:28:37.235708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.167 [2024-07-15 23:28:37.235744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.167 qpair failed and we were unable to recover it. 00:25:22.167 [2024-07-15 23:28:37.235885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.167 [2024-07-15 23:28:37.235911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.167 qpair failed and we were unable to recover it. 00:25:22.167 [2024-07-15 23:28:37.236059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.167 [2024-07-15 23:28:37.236085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.167 qpair failed and we were unable to recover it. 00:25:22.167 [2024-07-15 23:28:37.236221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.167 [2024-07-15 23:28:37.236246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.167 qpair failed and we were unable to recover it. 00:25:22.167 [2024-07-15 23:28:37.236436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.167 [2024-07-15 23:28:37.236462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.167 qpair failed and we were unable to recover it. 00:25:22.167 [2024-07-15 23:28:37.236609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.167 [2024-07-15 23:28:37.236638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.167 qpair failed and we were unable to recover it. 00:25:22.167 [2024-07-15 23:28:37.236839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.167 [2024-07-15 23:28:37.236866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.167 qpair failed and we were unable to recover it. 00:25:22.167 [2024-07-15 23:28:37.237039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.167 [2024-07-15 23:28:37.237066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.167 qpair failed and we were unable to recover it. 00:25:22.167 [2024-07-15 23:28:37.237177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.167 [2024-07-15 23:28:37.237205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.167 qpair failed and we were unable to recover it. 
00:25:22.167 [2024-07-15 23:28:37.237325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.167 [2024-07-15 23:28:37.237351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.167 qpair failed and we were unable to recover it. 00:25:22.167 [2024-07-15 23:28:37.237461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.167 [2024-07-15 23:28:37.237498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.167 qpair failed and we were unable to recover it. 00:25:22.167 [2024-07-15 23:28:37.237721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.167 [2024-07-15 23:28:37.237782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.167 qpair failed and we were unable to recover it. 00:25:22.167 [2024-07-15 23:28:37.237905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.167 [2024-07-15 23:28:37.237930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.167 qpair failed and we were unable to recover it. 00:25:22.167 [2024-07-15 23:28:37.238068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.167 [2024-07-15 23:28:37.238096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.167 qpair failed and we were unable to recover it. 00:25:22.167 [2024-07-15 23:28:37.238224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.167 [2024-07-15 23:28:37.238250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.167 qpair failed and we were unable to recover it. 00:25:22.167 [2024-07-15 23:28:37.238423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.168 [2024-07-15 23:28:37.238449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.168 qpair failed and we were unable to recover it. 00:25:22.168 [2024-07-15 23:28:37.238566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.168 [2024-07-15 23:28:37.238595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.168 qpair failed and we were unable to recover it. 00:25:22.168 [2024-07-15 23:28:37.238837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.168 [2024-07-15 23:28:37.238867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.168 qpair failed and we were unable to recover it. 00:25:22.168 [2024-07-15 23:28:37.238985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.168 [2024-07-15 23:28:37.239011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.168 qpair failed and we were unable to recover it. 
00:25:22.168 [2024-07-15 23:28:37.239120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.168 [2024-07-15 23:28:37.239149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.168 qpair failed and we were unable to recover it. 00:25:22.168 [2024-07-15 23:28:37.239291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.168 [2024-07-15 23:28:37.239317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.168 qpair failed and we were unable to recover it. 00:25:22.168 [2024-07-15 23:28:37.239458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.168 [2024-07-15 23:28:37.239484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.168 qpair failed and we were unable to recover it. 00:25:22.168 [2024-07-15 23:28:37.239631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.168 [2024-07-15 23:28:37.239657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.168 qpair failed and we were unable to recover it. 00:25:22.168 [2024-07-15 23:28:37.239786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.168 [2024-07-15 23:28:37.239812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.168 qpair failed and we were unable to recover it. 00:25:22.168 [2024-07-15 23:28:37.239959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.168 [2024-07-15 23:28:37.239990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.168 qpair failed and we were unable to recover it. 00:25:22.168 [2024-07-15 23:28:37.240107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.168 [2024-07-15 23:28:37.240133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.168 qpair failed and we were unable to recover it. 00:25:22.168 [2024-07-15 23:28:37.240264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.168 [2024-07-15 23:28:37.240289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.168 qpair failed and we were unable to recover it. 00:25:22.168 [2024-07-15 23:28:37.240421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.168 [2024-07-15 23:28:37.240446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.168 qpair failed and we were unable to recover it. 00:25:22.168 [2024-07-15 23:28:37.240566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.168 [2024-07-15 23:28:37.240592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.168 qpair failed and we were unable to recover it. 
00:25:22.168 [2024-07-15 23:28:37.240749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.168 [2024-07-15 23:28:37.240775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.168 qpair failed and we were unable to recover it. 00:25:22.168 [2024-07-15 23:28:37.240921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.168 [2024-07-15 23:28:37.240947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.168 qpair failed and we were unable to recover it. 00:25:22.168 [2024-07-15 23:28:37.241122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.168 [2024-07-15 23:28:37.241148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.168 qpair failed and we were unable to recover it. 00:25:22.168 [2024-07-15 23:28:37.241290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.168 [2024-07-15 23:28:37.241315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.168 qpair failed and we were unable to recover it. 00:25:22.168 [2024-07-15 23:28:37.241451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.168 [2024-07-15 23:28:37.241478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.168 qpair failed and we were unable to recover it. 00:25:22.168 [2024-07-15 23:28:37.241587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.168 [2024-07-15 23:28:37.241612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.168 qpair failed and we were unable to recover it. 00:25:22.168 [2024-07-15 23:28:37.241757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.168 [2024-07-15 23:28:37.241783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.168 qpair failed and we were unable to recover it. 00:25:22.168 [2024-07-15 23:28:37.241926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.168 [2024-07-15 23:28:37.241952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.168 qpair failed and we were unable to recover it. 00:25:22.168 [2024-07-15 23:28:37.242094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.168 [2024-07-15 23:28:37.242124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.168 qpair failed and we were unable to recover it. 00:25:22.168 [2024-07-15 23:28:37.242262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.168 [2024-07-15 23:28:37.242288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.168 qpair failed and we were unable to recover it. 
00:25:22.168 [2024-07-15 23:28:37.242426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.168 [2024-07-15 23:28:37.242451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.168 qpair failed and we were unable to recover it. 00:25:22.168 [2024-07-15 23:28:37.242573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.168 [2024-07-15 23:28:37.242599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.168 qpair failed and we were unable to recover it. 00:25:22.168 [2024-07-15 23:28:37.242765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.168 [2024-07-15 23:28:37.242792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.168 qpair failed and we were unable to recover it. 00:25:22.168 [2024-07-15 23:28:37.242915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.168 [2024-07-15 23:28:37.242940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.168 qpair failed and we were unable to recover it. 00:25:22.168 [2024-07-15 23:28:37.243105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.168 [2024-07-15 23:28:37.243131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.168 qpair failed and we were unable to recover it. 00:25:22.168 [2024-07-15 23:28:37.243241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.168 [2024-07-15 23:28:37.243266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.168 qpair failed and we were unable to recover it. 00:25:22.168 [2024-07-15 23:28:37.243437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.168 [2024-07-15 23:28:37.243463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.168 qpair failed and we were unable to recover it. 00:25:22.168 [2024-07-15 23:28:37.243572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.168 [2024-07-15 23:28:37.243601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.168 qpair failed and we were unable to recover it. 00:25:22.168 [2024-07-15 23:28:37.243772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.168 [2024-07-15 23:28:37.243800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.168 qpair failed and we were unable to recover it. 00:25:22.168 [2024-07-15 23:28:37.243967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.169 [2024-07-15 23:28:37.243993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.169 qpair failed and we were unable to recover it. 
00:25:22.169 [2024-07-15 23:28:37.244124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.169 [2024-07-15 23:28:37.244149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.169 qpair failed and we were unable to recover it. 00:25:22.169 [2024-07-15 23:28:37.244299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.169 [2024-07-15 23:28:37.244324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.169 qpair failed and we were unable to recover it. 00:25:22.169 [2024-07-15 23:28:37.244441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.169 [2024-07-15 23:28:37.244468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.169 qpair failed and we were unable to recover it. 00:25:22.169 [2024-07-15 23:28:37.244778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.169 [2024-07-15 23:28:37.244807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.169 qpair failed and we were unable to recover it. 00:25:22.169 [2024-07-15 23:28:37.244977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.169 [2024-07-15 23:28:37.245003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.169 qpair failed and we were unable to recover it. 00:25:22.169 [2024-07-15 23:28:37.245169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.169 [2024-07-15 23:28:37.245195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.169 qpair failed and we were unable to recover it. 00:25:22.169 [2024-07-15 23:28:37.245355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.169 [2024-07-15 23:28:37.245386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.169 qpair failed and we were unable to recover it. 00:25:22.169 [2024-07-15 23:28:37.245532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.169 [2024-07-15 23:28:37.245558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.169 qpair failed and we were unable to recover it. 00:25:22.169 [2024-07-15 23:28:37.245703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.169 [2024-07-15 23:28:37.245728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.169 qpair failed and we were unable to recover it. 00:25:22.169 [2024-07-15 23:28:37.245872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.169 [2024-07-15 23:28:37.245903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.169 qpair failed and we were unable to recover it. 
00:25:22.169 [2024-07-15 23:28:37.246058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.169 [2024-07-15 23:28:37.246084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.169 qpair failed and we were unable to recover it. 00:25:22.169 [2024-07-15 23:28:37.246215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.169 [2024-07-15 23:28:37.246240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.169 qpair failed and we were unable to recover it. 00:25:22.169 [2024-07-15 23:28:37.246390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.169 [2024-07-15 23:28:37.246415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.169 qpair failed and we were unable to recover it. 00:25:22.169 [2024-07-15 23:28:37.246589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.169 [2024-07-15 23:28:37.246615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.169 qpair failed and we were unable to recover it. 00:25:22.169 [2024-07-15 23:28:37.246752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.169 [2024-07-15 23:28:37.246785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.169 qpair failed and we were unable to recover it. 00:25:22.169 [2024-07-15 23:28:37.246926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.169 [2024-07-15 23:28:37.246955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.169 qpair failed and we were unable to recover it. 00:25:22.169 [2024-07-15 23:28:37.247058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.169 [2024-07-15 23:28:37.247089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.169 qpair failed and we were unable to recover it. 00:25:22.169 [2024-07-15 23:28:37.247260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.169 [2024-07-15 23:28:37.247284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.169 qpair failed and we were unable to recover it. 00:25:22.169 [2024-07-15 23:28:37.247391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.169 [2024-07-15 23:28:37.247416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.169 qpair failed and we were unable to recover it. 00:25:22.169 [2024-07-15 23:28:37.247588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.169 [2024-07-15 23:28:37.247619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.169 qpair failed and we were unable to recover it. 
00:25:22.169 [2024-07-15 23:28:37.247800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.169 [2024-07-15 23:28:37.247827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.169 qpair failed and we were unable to recover it. 00:25:22.169 [2024-07-15 23:28:37.247959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.169 [2024-07-15 23:28:37.247985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.169 qpair failed and we were unable to recover it. 00:25:22.169 [2024-07-15 23:28:37.248098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.169 [2024-07-15 23:28:37.248123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.169 qpair failed and we were unable to recover it. 00:25:22.169 [2024-07-15 23:28:37.248323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.169 [2024-07-15 23:28:37.248350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.169 qpair failed and we were unable to recover it. 00:25:22.169 [2024-07-15 23:28:37.248487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.169 [2024-07-15 23:28:37.248517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.169 qpair failed and we were unable to recover it. 00:25:22.169 [2024-07-15 23:28:37.248656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.169 [2024-07-15 23:28:37.248680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.169 qpair failed and we were unable to recover it. 00:25:22.169 [2024-07-15 23:28:37.248846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.169 [2024-07-15 23:28:37.248873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.169 qpair failed and we were unable to recover it. 00:25:22.169 [2024-07-15 23:28:37.249015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.169 [2024-07-15 23:28:37.249040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.169 qpair failed and we were unable to recover it. 00:25:22.169 [2024-07-15 23:28:37.249202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.169 [2024-07-15 23:28:37.249229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.169 qpair failed and we were unable to recover it. 00:25:22.169 [2024-07-15 23:28:37.249384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.169 [2024-07-15 23:28:37.249410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.169 qpair failed and we were unable to recover it. 
00:25:22.169 [2024-07-15 23:28:37.249545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.169 [2024-07-15 23:28:37.249569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.169 qpair failed and we were unable to recover it. 00:25:22.169 [2024-07-15 23:28:37.249705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.169 [2024-07-15 23:28:37.249730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.169 qpair failed and we were unable to recover it. 00:25:22.169 [2024-07-15 23:28:37.249845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.169 [2024-07-15 23:28:37.249870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.169 qpair failed and we were unable to recover it. 00:25:22.169 [2024-07-15 23:28:37.250037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.169 [2024-07-15 23:28:37.250062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.169 qpair failed and we were unable to recover it. 00:25:22.169 [2024-07-15 23:28:37.250202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.169 [2024-07-15 23:28:37.250230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.169 qpair failed and we were unable to recover it. 00:25:22.169 [2024-07-15 23:28:37.250369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.169 [2024-07-15 23:28:37.250395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.169 qpair failed and we were unable to recover it. 00:25:22.169 [2024-07-15 23:28:37.250506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.169 [2024-07-15 23:28:37.250536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.169 qpair failed and we were unable to recover it. 00:25:22.169 [2024-07-15 23:28:37.250675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.169 [2024-07-15 23:28:37.250700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.169 qpair failed and we were unable to recover it. 00:25:22.169 [2024-07-15 23:28:37.250869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.169 [2024-07-15 23:28:37.250895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.169 qpair failed and we were unable to recover it. 00:25:22.169 [2024-07-15 23:28:37.251003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.169 [2024-07-15 23:28:37.251030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.169 qpair failed and we were unable to recover it. 
00:25:22.169 [2024-07-15 23:28:37.251184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.170 [2024-07-15 23:28:37.251210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.170 qpair failed and we were unable to recover it. 00:25:22.170 [2024-07-15 23:28:37.251351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.170 [2024-07-15 23:28:37.251376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.170 qpair failed and we were unable to recover it. 00:25:22.170 [2024-07-15 23:28:37.251542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.170 [2024-07-15 23:28:37.251567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.170 qpair failed and we were unable to recover it. 00:25:22.170 [2024-07-15 23:28:37.251718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.170 [2024-07-15 23:28:37.251753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.170 qpair failed and we were unable to recover it. 00:25:22.170 [2024-07-15 23:28:37.251876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.170 [2024-07-15 23:28:37.251900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.170 qpair failed and we were unable to recover it. 00:25:22.170 [2024-07-15 23:28:37.252056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.170 [2024-07-15 23:28:37.252082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.170 qpair failed and we were unable to recover it. 00:25:22.170 [2024-07-15 23:28:37.252245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.170 [2024-07-15 23:28:37.252270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.170 qpair failed and we were unable to recover it. 00:25:22.170 [2024-07-15 23:28:37.252449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.170 [2024-07-15 23:28:37.252474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.170 qpair failed and we were unable to recover it. 00:25:22.170 [2024-07-15 23:28:37.252643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.170 [2024-07-15 23:28:37.252670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.170 qpair failed and we were unable to recover it. 00:25:22.170 [2024-07-15 23:28:37.252812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.170 [2024-07-15 23:28:37.252844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.170 qpair failed and we were unable to recover it. 
00:25:22.170 [2024-07-15 23:28:37.252992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.170 [2024-07-15 23:28:37.253016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.170 qpair failed and we were unable to recover it. 00:25:22.170 [2024-07-15 23:28:37.253141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.170 [2024-07-15 23:28:37.253168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.170 qpair failed and we were unable to recover it. 00:25:22.170 [2024-07-15 23:28:37.253314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.170 [2024-07-15 23:28:37.253338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.170 qpair failed and we were unable to recover it. 00:25:22.170 [2024-07-15 23:28:37.253482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.170 [2024-07-15 23:28:37.253507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.170 qpair failed and we were unable to recover it. 00:25:22.170 [2024-07-15 23:28:37.253669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.170 [2024-07-15 23:28:37.253695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.170 qpair failed and we were unable to recover it. 00:25:22.170 [2024-07-15 23:28:37.253842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.170 [2024-07-15 23:28:37.253868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.170 qpair failed and we were unable to recover it. 00:25:22.170 [2024-07-15 23:28:37.254009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.170 [2024-07-15 23:28:37.254035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.170 qpair failed and we were unable to recover it. 00:25:22.170 [2024-07-15 23:28:37.254173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.170 [2024-07-15 23:28:37.254198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.170 qpair failed and we were unable to recover it. 00:25:22.170 [2024-07-15 23:28:37.254346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.170 [2024-07-15 23:28:37.254371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.170 qpair failed and we were unable to recover it. 00:25:22.170 [2024-07-15 23:28:37.254512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.170 [2024-07-15 23:28:37.254542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.170 qpair failed and we were unable to recover it. 
00:25:22.170 [2024-07-15 23:28:37.254682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.170 [2024-07-15 23:28:37.254707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.170 qpair failed and we were unable to recover it. 00:25:22.170 [2024-07-15 23:28:37.254852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.170 [2024-07-15 23:28:37.254878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.170 qpair failed and we were unable to recover it. 00:25:22.170 [2024-07-15 23:28:37.255018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.170 [2024-07-15 23:28:37.255042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.170 qpair failed and we were unable to recover it. 00:25:22.170 [2024-07-15 23:28:37.255223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.170 [2024-07-15 23:28:37.255250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.170 qpair failed and we were unable to recover it. 00:25:22.170 [2024-07-15 23:28:37.255397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.170 [2024-07-15 23:28:37.255422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.170 qpair failed and we were unable to recover it. 00:25:22.170 [2024-07-15 23:28:37.255536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.170 [2024-07-15 23:28:37.255560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.170 qpair failed and we were unable to recover it. 00:25:22.170 [2024-07-15 23:28:37.255696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.170 [2024-07-15 23:28:37.255727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.170 qpair failed and we were unable to recover it. 00:25:22.170 [2024-07-15 23:28:37.255882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.170 [2024-07-15 23:28:37.255908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.170 qpair failed and we were unable to recover it. 00:25:22.170 [2024-07-15 23:28:37.256052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.170 [2024-07-15 23:28:37.256077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.170 qpair failed and we were unable to recover it. 00:25:22.170 [2024-07-15 23:28:37.256186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.170 [2024-07-15 23:28:37.256210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.170 qpair failed and we were unable to recover it. 
00:25:22.170 [2024-07-15 23:28:37.256364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.170 [2024-07-15 23:28:37.256390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.170 qpair failed and we were unable to recover it. 00:25:22.170 [2024-07-15 23:28:37.256530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.170 [2024-07-15 23:28:37.256557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.170 qpair failed and we were unable to recover it. 00:25:22.170 [2024-07-15 23:28:37.256734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.170 [2024-07-15 23:28:37.256770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.170 qpair failed and we were unable to recover it. 00:25:22.170 [2024-07-15 23:28:37.256885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.170 [2024-07-15 23:28:37.256911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.170 qpair failed and we were unable to recover it. 00:25:22.170 [2024-07-15 23:28:37.257015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.170 [2024-07-15 23:28:37.257040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.170 qpair failed and we were unable to recover it. 00:25:22.170 [2024-07-15 23:28:37.257161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.170 [2024-07-15 23:28:37.257187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.170 qpair failed and we were unable to recover it. 00:25:22.170 [2024-07-15 23:28:37.257329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.170 [2024-07-15 23:28:37.257359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.170 qpair failed and we were unable to recover it. 00:25:22.170 [2024-07-15 23:28:37.257500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.170 [2024-07-15 23:28:37.257526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.170 qpair failed and we were unable to recover it. 00:25:22.170 [2024-07-15 23:28:37.257661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.170 [2024-07-15 23:28:37.257687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.170 qpair failed and we were unable to recover it. 00:25:22.170 [2024-07-15 23:28:37.257834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.170 [2024-07-15 23:28:37.257860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.170 qpair failed and we were unable to recover it. 
00:25:22.170 [2024-07-15 23:28:37.258011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.171 [2024-07-15 23:28:37.258037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.171 qpair failed and we were unable to recover it. 00:25:22.171 [2024-07-15 23:28:37.258175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.171 [2024-07-15 23:28:37.258203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.171 qpair failed and we were unable to recover it. 00:25:22.171 [2024-07-15 23:28:37.258346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.171 [2024-07-15 23:28:37.258372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.171 qpair failed and we were unable to recover it. 00:25:22.171 [2024-07-15 23:28:37.258544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.171 [2024-07-15 23:28:37.258575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.171 qpair failed and we were unable to recover it. 00:25:22.171 [2024-07-15 23:28:37.258718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.171 [2024-07-15 23:28:37.258753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.171 qpair failed and we were unable to recover it. 00:25:22.171 [2024-07-15 23:28:37.258876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.171 [2024-07-15 23:28:37.258901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.171 qpair failed and we were unable to recover it. 00:25:22.171 [2024-07-15 23:28:37.259036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.171 [2024-07-15 23:28:37.259061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.171 qpair failed and we were unable to recover it. 00:25:22.171 [2024-07-15 23:28:37.259248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.171 [2024-07-15 23:28:37.259274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.171 qpair failed and we were unable to recover it. 00:25:22.171 [2024-07-15 23:28:37.259394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.171 [2024-07-15 23:28:37.259419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.171 qpair failed and we were unable to recover it. 00:25:22.171 [2024-07-15 23:28:37.259557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.171 [2024-07-15 23:28:37.259587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.171 qpair failed and we were unable to recover it. 
00:25:22.171 [2024-07-15 23:28:37.259728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.171 [2024-07-15 23:28:37.259762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.171 qpair failed and we were unable to recover it. 00:25:22.171 [2024-07-15 23:28:37.259898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.171 [2024-07-15 23:28:37.259924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.171 qpair failed and we were unable to recover it. 00:25:22.171 [2024-07-15 23:28:37.260066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.171 [2024-07-15 23:28:37.260091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.171 qpair failed and we were unable to recover it. 00:25:22.171 [2024-07-15 23:28:37.260226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.171 [2024-07-15 23:28:37.260252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.171 qpair failed and we were unable to recover it. 00:25:22.171 [2024-07-15 23:28:37.260369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.171 [2024-07-15 23:28:37.260394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.171 qpair failed and we were unable to recover it. 00:25:22.171 [2024-07-15 23:28:37.260531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.171 [2024-07-15 23:28:37.260557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.171 qpair failed and we were unable to recover it. 00:25:22.171 [2024-07-15 23:28:37.260658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.171 [2024-07-15 23:28:37.260682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.171 qpair failed and we were unable to recover it. 00:25:22.171 [2024-07-15 23:28:37.260854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.171 [2024-07-15 23:28:37.260881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.171 qpair failed and we were unable to recover it. 00:25:22.171 [2024-07-15 23:28:37.261014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.171 [2024-07-15 23:28:37.261043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.171 qpair failed and we were unable to recover it. 00:25:22.171 [2024-07-15 23:28:37.261155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.171 [2024-07-15 23:28:37.261180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.171 qpair failed and we were unable to recover it. 
00:25:22.171 [2024-07-15 23:28:37.261352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.171 [2024-07-15 23:28:37.261379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.171 qpair failed and we were unable to recover it. 00:25:22.171 [2024-07-15 23:28:37.261527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.171 [2024-07-15 23:28:37.261552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.171 qpair failed and we were unable to recover it. 00:25:22.171 [2024-07-15 23:28:37.261702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.171 [2024-07-15 23:28:37.261728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.171 qpair failed and we were unable to recover it. 00:25:22.171 [2024-07-15 23:28:37.261914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.171 [2024-07-15 23:28:37.261941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.171 qpair failed and we were unable to recover it. 00:25:22.171 [2024-07-15 23:28:37.262115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.171 [2024-07-15 23:28:37.262140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.171 qpair failed and we were unable to recover it. 00:25:22.171 [2024-07-15 23:28:37.262276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.171 [2024-07-15 23:28:37.262302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.171 qpair failed and we were unable to recover it. 00:25:22.171 [2024-07-15 23:28:37.262473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.171 [2024-07-15 23:28:37.262500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.171 qpair failed and we were unable to recover it. 00:25:22.171 [2024-07-15 23:28:37.262684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.171 [2024-07-15 23:28:37.262711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.171 qpair failed and we were unable to recover it. 00:25:22.171 [2024-07-15 23:28:37.262882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.171 [2024-07-15 23:28:37.262908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.171 qpair failed and we were unable to recover it. 00:25:22.171 [2024-07-15 23:28:37.263051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.171 [2024-07-15 23:28:37.263079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.171 qpair failed and we were unable to recover it. 
00:25:22.171 [2024-07-15 23:28:37.263195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.171 [2024-07-15 23:28:37.263225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.171 qpair failed and we were unable to recover it. 00:25:22.171 [2024-07-15 23:28:37.263341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.171 [2024-07-15 23:28:37.263370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.171 qpair failed and we were unable to recover it. 00:25:22.171 [2024-07-15 23:28:37.263513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.171 [2024-07-15 23:28:37.263538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.171 qpair failed and we were unable to recover it. 00:25:22.171 [2024-07-15 23:28:37.263672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.171 [2024-07-15 23:28:37.263704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.171 qpair failed and we were unable to recover it. 00:25:22.171 [2024-07-15 23:28:37.263820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.171 [2024-07-15 23:28:37.263846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.171 qpair failed and we were unable to recover it. 00:25:22.171 [2024-07-15 23:28:37.263972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.171 [2024-07-15 23:28:37.263999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.171 qpair failed and we were unable to recover it. 00:25:22.171 [2024-07-15 23:28:37.264138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.171 [2024-07-15 23:28:37.264162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.171 qpair failed and we were unable to recover it. 00:25:22.171 [2024-07-15 23:28:37.264323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.171 [2024-07-15 23:28:37.264348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.171 qpair failed and we were unable to recover it. 00:25:22.171 [2024-07-15 23:28:37.264486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.171 [2024-07-15 23:28:37.264512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.171 qpair failed and we were unable to recover it. 00:25:22.171 [2024-07-15 23:28:37.264658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.171 [2024-07-15 23:28:37.264684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.171 qpair failed and we were unable to recover it. 
00:25:22.171 [2024-07-15 23:28:37.264799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.172 [2024-07-15 23:28:37.264830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.172 qpair failed and we were unable to recover it. 00:25:22.172 [2024-07-15 23:28:37.264980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.172 [2024-07-15 23:28:37.265005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.172 qpair failed and we were unable to recover it. 00:25:22.172 [2024-07-15 23:28:37.265143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.172 [2024-07-15 23:28:37.265168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.172 qpair failed and we were unable to recover it. 00:25:22.172 [2024-07-15 23:28:37.265302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.172 [2024-07-15 23:28:37.265328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.172 qpair failed and we were unable to recover it. 00:25:22.172 [2024-07-15 23:28:37.265448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.172 [2024-07-15 23:28:37.265474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.172 qpair failed and we were unable to recover it. 00:25:22.172 [2024-07-15 23:28:37.265615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.172 [2024-07-15 23:28:37.265639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.172 qpair failed and we were unable to recover it. 00:25:22.172 [2024-07-15 23:28:37.265780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.172 [2024-07-15 23:28:37.265808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.172 qpair failed and we were unable to recover it. 00:25:22.172 [2024-07-15 23:28:37.265923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.172 [2024-07-15 23:28:37.265948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.172 qpair failed and we were unable to recover it. 00:25:22.172 [2024-07-15 23:28:37.266091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.172 [2024-07-15 23:28:37.266118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.172 qpair failed and we were unable to recover it. 00:25:22.172 [2024-07-15 23:28:37.266228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.172 [2024-07-15 23:28:37.266251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.172 qpair failed and we were unable to recover it. 
00:25:22.172 [2024-07-15 23:28:37.266394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.172 [2024-07-15 23:28:37.266421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.172 qpair failed and we were unable to recover it. 00:25:22.172 [2024-07-15 23:28:37.266538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.172 [2024-07-15 23:28:37.266563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.172 qpair failed and we were unable to recover it. 00:25:22.172 [2024-07-15 23:28:37.266732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.172 [2024-07-15 23:28:37.266767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.172 qpair failed and we were unable to recover it. 00:25:22.172 [2024-07-15 23:28:37.266907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.172 [2024-07-15 23:28:37.266934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.172 qpair failed and we were unable to recover it. 00:25:22.172 [2024-07-15 23:28:37.267099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.172 [2024-07-15 23:28:37.267123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.172 qpair failed and we were unable to recover it. 00:25:22.172 [2024-07-15 23:28:37.267271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.172 [2024-07-15 23:28:37.267296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.172 qpair failed and we were unable to recover it. 00:25:22.172 [2024-07-15 23:28:37.267433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.172 [2024-07-15 23:28:37.267464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.172 qpair failed and we were unable to recover it. 00:25:22.172 [2024-07-15 23:28:37.267649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.172 [2024-07-15 23:28:37.267678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.172 qpair failed and we were unable to recover it. 00:25:22.172 [2024-07-15 23:28:37.267810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.172 [2024-07-15 23:28:37.267836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.172 qpair failed and we were unable to recover it. 00:25:22.172 [2024-07-15 23:28:37.268006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.172 [2024-07-15 23:28:37.268034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.172 qpair failed and we were unable to recover it. 
00:25:22.172 [2024-07-15 23:28:37.268176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.172 [2024-07-15 23:28:37.268201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.172 qpair failed and we were unable to recover it. 00:25:22.172 [2024-07-15 23:28:37.268315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.172 [2024-07-15 23:28:37.268341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.172 qpair failed and we were unable to recover it. 00:25:22.172 [2024-07-15 23:28:37.268468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.172 [2024-07-15 23:28:37.268494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.172 qpair failed and we were unable to recover it. 00:25:22.172 [2024-07-15 23:28:37.268670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.172 [2024-07-15 23:28:37.268696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.172 qpair failed and we were unable to recover it. 00:25:22.172 [2024-07-15 23:28:37.268865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.172 [2024-07-15 23:28:37.268893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.172 qpair failed and we were unable to recover it. 00:25:22.172 [2024-07-15 23:28:37.269034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.172 [2024-07-15 23:28:37.269058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.172 qpair failed and we were unable to recover it. 00:25:22.172 [2024-07-15 23:28:37.269193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.172 [2024-07-15 23:28:37.269219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.172 qpair failed and we were unable to recover it. 00:25:22.172 [2024-07-15 23:28:37.269364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.172 [2024-07-15 23:28:37.269393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.172 qpair failed and we were unable to recover it. 00:25:22.172 [2024-07-15 23:28:37.269539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.172 [2024-07-15 23:28:37.269565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.172 qpair failed and we were unable to recover it. 00:25:22.172 [2024-07-15 23:28:37.269729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.172 [2024-07-15 23:28:37.269764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.172 qpair failed and we were unable to recover it. 
00:25:22.172 [2024-07-15 23:28:37.269871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.172 [2024-07-15 23:28:37.269896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.172 qpair failed and we were unable to recover it. 00:25:22.172 [2024-07-15 23:28:37.270018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.172 [2024-07-15 23:28:37.270046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.172 qpair failed and we were unable to recover it. 00:25:22.172 [2024-07-15 23:28:37.270182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.172 [2024-07-15 23:28:37.270207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.172 qpair failed and we were unable to recover it. 00:25:22.172 [2024-07-15 23:28:37.270350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.172 [2024-07-15 23:28:37.270375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.172 qpair failed and we were unable to recover it. 00:25:22.172 [2024-07-15 23:28:37.270510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.172 [2024-07-15 23:28:37.270541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.172 qpair failed and we were unable to recover it. 00:25:22.172 [2024-07-15 23:28:37.270683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.172 [2024-07-15 23:28:37.270709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.172 qpair failed and we were unable to recover it. 00:25:22.172 [2024-07-15 23:28:37.270854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.172 [2024-07-15 23:28:37.270880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.172 qpair failed and we were unable to recover it. 00:25:22.172 [2024-07-15 23:28:37.271044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.172 [2024-07-15 23:28:37.271074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.172 qpair failed and we were unable to recover it. 00:25:22.172 [2024-07-15 23:28:37.271243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.172 [2024-07-15 23:28:37.271268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.172 qpair failed and we were unable to recover it. 00:25:22.172 [2024-07-15 23:28:37.271411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.172 [2024-07-15 23:28:37.271437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.172 qpair failed and we were unable to recover it. 
00:25:22.172 [2024-07-15 23:28:37.271602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.173 [2024-07-15 23:28:37.271633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.173 qpair failed and we were unable to recover it. 00:25:22.173 [2024-07-15 23:28:37.271782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.173 [2024-07-15 23:28:37.271809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.173 qpair failed and we were unable to recover it. 00:25:22.173 [2024-07-15 23:28:37.271929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.173 [2024-07-15 23:28:37.271954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.173 qpair failed and we were unable to recover it. 00:25:22.173 [2024-07-15 23:28:37.272099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.173 [2024-07-15 23:28:37.272124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.173 qpair failed and we were unable to recover it. 00:25:22.173 [2024-07-15 23:28:37.272263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.173 [2024-07-15 23:28:37.272288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.173 qpair failed and we were unable to recover it. 00:25:22.173 [2024-07-15 23:28:37.272403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.173 [2024-07-15 23:28:37.272427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.173 qpair failed and we were unable to recover it. 00:25:22.173 [2024-07-15 23:28:37.272567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.173 [2024-07-15 23:28:37.272598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.173 qpair failed and we were unable to recover it. 00:25:22.173 [2024-07-15 23:28:37.272719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.173 [2024-07-15 23:28:37.272756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.173 qpair failed and we were unable to recover it. 00:25:22.173 [2024-07-15 23:28:37.272934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.173 [2024-07-15 23:28:37.272960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.173 qpair failed and we were unable to recover it. 00:25:22.173 [2024-07-15 23:28:37.273099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.173 [2024-07-15 23:28:37.273125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.173 qpair failed and we were unable to recover it. 
00:25:22.173 [2024-07-15 23:28:37.273294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.173 [2024-07-15 23:28:37.273321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.173 qpair failed and we were unable to recover it. 00:25:22.173 [2024-07-15 23:28:37.273469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.173 [2024-07-15 23:28:37.273494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.173 qpair failed and we were unable to recover it. 00:25:22.173 [2024-07-15 23:28:37.273642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.173 [2024-07-15 23:28:37.273668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.173 qpair failed and we were unable to recover it. 00:25:22.173 [2024-07-15 23:28:37.273831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.173 [2024-07-15 23:28:37.273863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.173 qpair failed and we were unable to recover it. 00:25:22.173 [2024-07-15 23:28:37.274037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.173 [2024-07-15 23:28:37.274062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.173 qpair failed and we were unable to recover it. 00:25:22.173 [2024-07-15 23:28:37.274172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.173 [2024-07-15 23:28:37.274198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.173 qpair failed and we were unable to recover it. 00:25:22.173 [2024-07-15 23:28:37.274329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.173 [2024-07-15 23:28:37.274353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.173 qpair failed and we were unable to recover it. 00:25:22.173 [2024-07-15 23:28:37.274472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.173 [2024-07-15 23:28:37.274497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.173 qpair failed and we were unable to recover it. 00:25:22.173 [2024-07-15 23:28:37.274604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.173 [2024-07-15 23:28:37.274636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.173 qpair failed and we were unable to recover it. 00:25:22.173 [2024-07-15 23:28:37.274813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.173 [2024-07-15 23:28:37.274840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.173 qpair failed and we were unable to recover it. 
00:25:22.173 [2024-07-15 23:28:37.275013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.173 [2024-07-15 23:28:37.275039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.173 qpair failed and we were unable to recover it. 00:25:22.173 [2024-07-15 23:28:37.275213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.173 [2024-07-15 23:28:37.275239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.173 qpair failed and we were unable to recover it. 00:25:22.173 [2024-07-15 23:28:37.275381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.173 [2024-07-15 23:28:37.275406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.173 qpair failed and we were unable to recover it. 00:25:22.173 [2024-07-15 23:28:37.275544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.173 [2024-07-15 23:28:37.275569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.173 qpair failed and we were unable to recover it. 00:25:22.173 [2024-07-15 23:28:37.275707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.173 [2024-07-15 23:28:37.275731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.173 qpair failed and we were unable to recover it. 00:25:22.173 [2024-07-15 23:28:37.275896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.173 [2024-07-15 23:28:37.275923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.173 qpair failed and we were unable to recover it. 00:25:22.173 [2024-07-15 23:28:37.276024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.173 [2024-07-15 23:28:37.276048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.173 qpair failed and we were unable to recover it. 00:25:22.173 [2024-07-15 23:28:37.276180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.173 [2024-07-15 23:28:37.276206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.173 qpair failed and we were unable to recover it. 00:25:22.173 [2024-07-15 23:28:37.276346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.173 [2024-07-15 23:28:37.276376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.173 qpair failed and we were unable to recover it. 00:25:22.173 [2024-07-15 23:28:37.276516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.173 [2024-07-15 23:28:37.276560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.173 qpair failed and we were unable to recover it. 
00:25:22.173 [2024-07-15 23:28:37.276701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.173 [2024-07-15 23:28:37.276727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.173 qpair failed and we were unable to recover it. 00:25:22.173 [2024-07-15 23:28:37.276858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.173 [2024-07-15 23:28:37.276885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.173 qpair failed and we were unable to recover it. 00:25:22.173 [2024-07-15 23:28:37.277035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.173 [2024-07-15 23:28:37.277061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.173 qpair failed and we were unable to recover it. 00:25:22.173 [2024-07-15 23:28:37.277200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.173 [2024-07-15 23:28:37.277225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.173 qpair failed and we were unable to recover it. 00:25:22.173 [2024-07-15 23:28:37.277325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.173 [2024-07-15 23:28:37.277350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.173 qpair failed and we were unable to recover it. 00:25:22.173 [2024-07-15 23:28:37.277522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.173 [2024-07-15 23:28:37.277547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.174 qpair failed and we were unable to recover it. 00:25:22.174 [2024-07-15 23:28:37.277712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.174 [2024-07-15 23:28:37.277745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.174 qpair failed and we were unable to recover it. 00:25:22.174 [2024-07-15 23:28:37.277870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.174 [2024-07-15 23:28:37.277895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.174 qpair failed and we were unable to recover it. 00:25:22.174 [2024-07-15 23:28:37.278005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.174 [2024-07-15 23:28:37.278030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.174 qpair failed and we were unable to recover it. 00:25:22.174 [2024-07-15 23:28:37.278193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.174 [2024-07-15 23:28:37.278219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.174 qpair failed and we were unable to recover it. 
00:25:22.174 [2024-07-15 23:28:37.278352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:22.174 [2024-07-15 23:28:37.278378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420
00:25:22.174 qpair failed and we were unable to recover it.
[... the same three-line failure repeats continuously from 23:28:37.278 through 23:28:37.314: posix.c:1023:posix_sock_create reports connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock reports a sock connection error to addr=10.0.0.2, port=4420; and each attempt ends with "qpair failed and we were unable to recover it." The first five attempts are against tqpair=0x7f7a20000b90, all later attempts against tqpair=0x1b9f1e0 ...]
00:25:22.179 [2024-07-15 23:28:37.314356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:22.179 [2024-07-15 23:28:37.314381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420
00:25:22.179 qpair failed and we were unable to recover it.
00:25:22.179 [2024-07-15 23:28:37.314552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.179 [2024-07-15 23:28:37.314577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.179 qpair failed and we were unable to recover it. 00:25:22.179 [2024-07-15 23:28:37.314690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.179 [2024-07-15 23:28:37.314715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.179 qpair failed and we were unable to recover it. 00:25:22.179 [2024-07-15 23:28:37.314869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.179 [2024-07-15 23:28:37.314894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.179 qpair failed and we were unable to recover it. 00:25:22.179 [2024-07-15 23:28:37.315035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.179 [2024-07-15 23:28:37.315060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.179 qpair failed and we were unable to recover it. 00:25:22.179 [2024-07-15 23:28:37.315267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.179 [2024-07-15 23:28:37.315291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.179 qpair failed and we were unable to recover it. 00:25:22.179 [2024-07-15 23:28:37.315430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.179 [2024-07-15 23:28:37.315455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.179 qpair failed and we were unable to recover it. 00:25:22.179 [2024-07-15 23:28:37.315587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.179 [2024-07-15 23:28:37.315624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.179 qpair failed and we were unable to recover it. 00:25:22.179 [2024-07-15 23:28:37.315774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.179 [2024-07-15 23:28:37.315799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.179 qpair failed and we were unable to recover it. 00:25:22.179 [2024-07-15 23:28:37.315941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.179 [2024-07-15 23:28:37.315966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.179 qpair failed and we were unable to recover it. 00:25:22.179 [2024-07-15 23:28:37.316111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.179 [2024-07-15 23:28:37.316136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.179 qpair failed and we were unable to recover it. 
00:25:22.179 [2024-07-15 23:28:37.316278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.179 [2024-07-15 23:28:37.316303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.179 qpair failed and we were unable to recover it. 00:25:22.179 [2024-07-15 23:28:37.316440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.179 [2024-07-15 23:28:37.316465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.179 qpair failed and we were unable to recover it. 00:25:22.179 [2024-07-15 23:28:37.316635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.179 [2024-07-15 23:28:37.316659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.179 qpair failed and we were unable to recover it. 00:25:22.179 [2024-07-15 23:28:37.316856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.179 [2024-07-15 23:28:37.316881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.179 qpair failed and we were unable to recover it. 00:25:22.179 [2024-07-15 23:28:37.316995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.179 [2024-07-15 23:28:37.317020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.179 qpair failed and we were unable to recover it. 00:25:22.179 [2024-07-15 23:28:37.317135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.179 [2024-07-15 23:28:37.317159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.179 qpair failed and we were unable to recover it. 00:25:22.179 [2024-07-15 23:28:37.317317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.179 [2024-07-15 23:28:37.317342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.179 qpair failed and we were unable to recover it. 00:25:22.179 [2024-07-15 23:28:37.317491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.179 [2024-07-15 23:28:37.317516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.179 qpair failed and we were unable to recover it. 00:25:22.179 [2024-07-15 23:28:37.317721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.179 [2024-07-15 23:28:37.317754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.179 qpair failed and we were unable to recover it. 00:25:22.179 [2024-07-15 23:28:37.317890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.179 [2024-07-15 23:28:37.317914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.179 qpair failed and we were unable to recover it. 
00:25:22.179 [2024-07-15 23:28:37.318051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.179 [2024-07-15 23:28:37.318076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.179 qpair failed and we were unable to recover it. 00:25:22.179 [2024-07-15 23:28:37.318242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.179 [2024-07-15 23:28:37.318267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.179 qpair failed and we were unable to recover it. 00:25:22.179 [2024-07-15 23:28:37.318429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.179 [2024-07-15 23:28:37.318454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.179 qpair failed and we were unable to recover it. 00:25:22.179 [2024-07-15 23:28:37.318588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.179 [2024-07-15 23:28:37.318612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.179 qpair failed and we were unable to recover it. 00:25:22.179 [2024-07-15 23:28:37.318843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.179 [2024-07-15 23:28:37.318868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.179 qpair failed and we were unable to recover it. 00:25:22.179 [2024-07-15 23:28:37.319002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.180 [2024-07-15 23:28:37.319027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.180 qpair failed and we were unable to recover it. 00:25:22.180 [2024-07-15 23:28:37.319212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.180 [2024-07-15 23:28:37.319237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.180 qpair failed and we were unable to recover it. 00:25:22.180 [2024-07-15 23:28:37.319344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.180 [2024-07-15 23:28:37.319369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.180 qpair failed and we were unable to recover it. 00:25:22.180 [2024-07-15 23:28:37.319496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.180 [2024-07-15 23:28:37.319521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.180 qpair failed and we were unable to recover it. 00:25:22.180 [2024-07-15 23:28:37.319727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.180 [2024-07-15 23:28:37.319759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.180 qpair failed and we were unable to recover it. 
00:25:22.180 [2024-07-15 23:28:37.319896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.180 [2024-07-15 23:28:37.319921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.180 qpair failed and we were unable to recover it. 00:25:22.180 [2024-07-15 23:28:37.320074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.180 [2024-07-15 23:28:37.320099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.180 qpair failed and we were unable to recover it. 00:25:22.180 [2024-07-15 23:28:37.320297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.180 [2024-07-15 23:28:37.320321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.180 qpair failed and we were unable to recover it. 00:25:22.180 [2024-07-15 23:28:37.320458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.180 [2024-07-15 23:28:37.320483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.180 qpair failed and we were unable to recover it. 00:25:22.180 [2024-07-15 23:28:37.320655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.180 [2024-07-15 23:28:37.320679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.180 qpair failed and we were unable to recover it. 00:25:22.180 [2024-07-15 23:28:37.320797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.180 [2024-07-15 23:28:37.320822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.180 qpair failed and we were unable to recover it. 00:25:22.180 [2024-07-15 23:28:37.321017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.180 [2024-07-15 23:28:37.321042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.180 qpair failed and we were unable to recover it. 00:25:22.180 [2024-07-15 23:28:37.321196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.180 [2024-07-15 23:28:37.321220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.180 qpair failed and we were unable to recover it. 00:25:22.180 [2024-07-15 23:28:37.321392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.180 [2024-07-15 23:28:37.321417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.180 qpair failed and we were unable to recover it. 00:25:22.180 [2024-07-15 23:28:37.321558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.180 [2024-07-15 23:28:37.321586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.180 qpair failed and we were unable to recover it. 
00:25:22.180 [2024-07-15 23:28:37.321699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.180 [2024-07-15 23:28:37.321723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.180 qpair failed and we were unable to recover it. 00:25:22.180 [2024-07-15 23:28:37.321877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.180 [2024-07-15 23:28:37.321902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.180 qpair failed and we were unable to recover it. 00:25:22.180 [2024-07-15 23:28:37.322077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.180 [2024-07-15 23:28:37.322102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.180 qpair failed and we were unable to recover it. 00:25:22.180 [2024-07-15 23:28:37.322268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.180 [2024-07-15 23:28:37.322293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.180 qpair failed and we were unable to recover it. 00:25:22.180 [2024-07-15 23:28:37.322435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.180 [2024-07-15 23:28:37.322459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.180 qpair failed and we were unable to recover it. 00:25:22.180 [2024-07-15 23:28:37.322625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.180 [2024-07-15 23:28:37.322650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.180 qpair failed and we were unable to recover it. 00:25:22.180 [2024-07-15 23:28:37.322832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.180 [2024-07-15 23:28:37.322858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.180 qpair failed and we were unable to recover it. 00:25:22.180 [2024-07-15 23:28:37.323050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.180 [2024-07-15 23:28:37.323075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.180 qpair failed and we were unable to recover it. 00:25:22.180 [2024-07-15 23:28:37.323209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.180 [2024-07-15 23:28:37.323233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.180 qpair failed and we were unable to recover it. 00:25:22.180 [2024-07-15 23:28:37.323385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.180 [2024-07-15 23:28:37.323411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.180 qpair failed and we were unable to recover it. 
00:25:22.180 [2024-07-15 23:28:37.323532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.180 [2024-07-15 23:28:37.323557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.180 qpair failed and we were unable to recover it. 00:25:22.180 [2024-07-15 23:28:37.323660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.180 [2024-07-15 23:28:37.323684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.180 qpair failed and we were unable to recover it. 00:25:22.180 [2024-07-15 23:28:37.323799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.180 [2024-07-15 23:28:37.323825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.180 qpair failed and we were unable to recover it. 00:25:22.180 [2024-07-15 23:28:37.323970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.180 [2024-07-15 23:28:37.323995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.180 qpair failed and we were unable to recover it. 00:25:22.180 [2024-07-15 23:28:37.324109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.180 [2024-07-15 23:28:37.324134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.180 qpair failed and we were unable to recover it. 00:25:22.180 [2024-07-15 23:28:37.324335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.180 [2024-07-15 23:28:37.324360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.180 qpair failed and we were unable to recover it. 00:25:22.180 [2024-07-15 23:28:37.324466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.180 [2024-07-15 23:28:37.324499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.180 qpair failed and we were unable to recover it. 00:25:22.180 [2024-07-15 23:28:37.324656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.180 [2024-07-15 23:28:37.324681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.180 qpair failed and we were unable to recover it. 00:25:22.180 [2024-07-15 23:28:37.324865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.180 [2024-07-15 23:28:37.324890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.180 qpair failed and we were unable to recover it. 00:25:22.180 [2024-07-15 23:28:37.325074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.180 [2024-07-15 23:28:37.325099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.180 qpair failed and we were unable to recover it. 
00:25:22.180 [2024-07-15 23:28:37.325309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.180 [2024-07-15 23:28:37.325333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.180 qpair failed and we were unable to recover it. 00:25:22.180 [2024-07-15 23:28:37.325470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.180 [2024-07-15 23:28:37.325494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.180 qpair failed and we were unable to recover it. 00:25:22.180 [2024-07-15 23:28:37.325601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.180 [2024-07-15 23:28:37.325626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.180 qpair failed and we were unable to recover it. 00:25:22.180 [2024-07-15 23:28:37.325763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.180 [2024-07-15 23:28:37.325789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.180 qpair failed and we were unable to recover it. 00:25:22.180 [2024-07-15 23:28:37.325905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.180 [2024-07-15 23:28:37.325929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.180 qpair failed and we were unable to recover it. 00:25:22.180 [2024-07-15 23:28:37.326107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.180 [2024-07-15 23:28:37.326132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.180 qpair failed and we were unable to recover it. 00:25:22.180 [2024-07-15 23:28:37.326261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.180 [2024-07-15 23:28:37.326290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.180 qpair failed and we were unable to recover it. 00:25:22.180 [2024-07-15 23:28:37.326407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.180 [2024-07-15 23:28:37.326431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.180 qpair failed and we were unable to recover it. 00:25:22.180 [2024-07-15 23:28:37.326567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.180 [2024-07-15 23:28:37.326591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.180 qpair failed and we were unable to recover it. 00:25:22.181 [2024-07-15 23:28:37.326797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.181 [2024-07-15 23:28:37.326823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.181 qpair failed and we were unable to recover it. 
00:25:22.181 [2024-07-15 23:28:37.326992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.181 [2024-07-15 23:28:37.327017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.181 qpair failed and we were unable to recover it. 00:25:22.181 [2024-07-15 23:28:37.327120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.181 [2024-07-15 23:28:37.327145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.181 qpair failed and we were unable to recover it. 00:25:22.181 [2024-07-15 23:28:37.327261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.181 [2024-07-15 23:28:37.327286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.181 qpair failed and we were unable to recover it. 00:25:22.181 [2024-07-15 23:28:37.327550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.181 [2024-07-15 23:28:37.327574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.181 qpair failed and we were unable to recover it. 00:25:22.181 [2024-07-15 23:28:37.327681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.181 [2024-07-15 23:28:37.327706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.181 qpair failed and we were unable to recover it. 00:25:22.181 [2024-07-15 23:28:37.327889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.181 [2024-07-15 23:28:37.327914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.181 qpair failed and we were unable to recover it. 00:25:22.181 [2024-07-15 23:28:37.328048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.181 [2024-07-15 23:28:37.328072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.181 qpair failed and we were unable to recover it. 00:25:22.181 [2024-07-15 23:28:37.328238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.181 [2024-07-15 23:28:37.328263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.181 qpair failed and we were unable to recover it. 00:25:22.181 [2024-07-15 23:28:37.328460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.181 [2024-07-15 23:28:37.328485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.181 qpair failed and we were unable to recover it. 00:25:22.181 [2024-07-15 23:28:37.328692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.181 [2024-07-15 23:28:37.328735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.181 qpair failed and we were unable to recover it. 
00:25:22.181 [2024-07-15 23:28:37.328902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.181 [2024-07-15 23:28:37.328929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.181 qpair failed and we were unable to recover it. 00:25:22.181 [2024-07-15 23:28:37.329072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.181 [2024-07-15 23:28:37.329098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.181 qpair failed and we were unable to recover it. 00:25:22.181 [2024-07-15 23:28:37.329247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.181 [2024-07-15 23:28:37.329273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.181 qpair failed and we were unable to recover it. 00:25:22.181 [2024-07-15 23:28:37.329401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.181 [2024-07-15 23:28:37.329426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.181 qpair failed and we were unable to recover it. 00:25:22.181 [2024-07-15 23:28:37.329537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.181 [2024-07-15 23:28:37.329562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.181 qpair failed and we were unable to recover it. 00:25:22.181 [2024-07-15 23:28:37.329708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.181 [2024-07-15 23:28:37.329733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.181 qpair failed and we were unable to recover it. 00:25:22.181 [2024-07-15 23:28:37.329906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.181 [2024-07-15 23:28:37.329932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.181 qpair failed and we were unable to recover it. 00:25:22.181 [2024-07-15 23:28:37.330072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.181 [2024-07-15 23:28:37.330097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.181 qpair failed and we were unable to recover it. 00:25:22.181 [2024-07-15 23:28:37.330233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.181 [2024-07-15 23:28:37.330258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.181 qpair failed and we were unable to recover it. 00:25:22.181 [2024-07-15 23:28:37.330391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.181 [2024-07-15 23:28:37.330417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.181 qpair failed and we were unable to recover it. 
00:25:22.181 [2024-07-15 23:28:37.330557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.181 [2024-07-15 23:28:37.330582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.181 qpair failed and we were unable to recover it. 00:25:22.181 [2024-07-15 23:28:37.330747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.181 [2024-07-15 23:28:37.330773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.181 qpair failed and we were unable to recover it. 00:25:22.181 [2024-07-15 23:28:37.330936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.181 [2024-07-15 23:28:37.330961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.181 qpair failed and we were unable to recover it. 00:25:22.181 [2024-07-15 23:28:37.331102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.181 [2024-07-15 23:28:37.331132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.181 qpair failed and we were unable to recover it. 00:25:22.181 [2024-07-15 23:28:37.331253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.181 [2024-07-15 23:28:37.331287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.181 qpair failed and we were unable to recover it. 00:25:22.181 [2024-07-15 23:28:37.331479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.181 [2024-07-15 23:28:37.331504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.181 qpair failed and we were unable to recover it. 00:25:22.181 [2024-07-15 23:28:37.331687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.181 [2024-07-15 23:28:37.331712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.181 qpair failed and we were unable to recover it. 00:25:22.181 [2024-07-15 23:28:37.331919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.181 [2024-07-15 23:28:37.331944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.181 qpair failed and we were unable to recover it. 00:25:22.181 [2024-07-15 23:28:37.332085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.181 [2024-07-15 23:28:37.332110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.181 qpair failed and we were unable to recover it. 00:25:22.181 [2024-07-15 23:28:37.332250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.181 [2024-07-15 23:28:37.332275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.181 qpair failed and we were unable to recover it. 
00:25:22.181 [2024-07-15 23:28:37.332418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.181 [2024-07-15 23:28:37.332443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.181 qpair failed and we were unable to recover it. 00:25:22.181 [2024-07-15 23:28:37.332581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.181 [2024-07-15 23:28:37.332606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.181 qpair failed and we were unable to recover it. 00:25:22.181 [2024-07-15 23:28:37.332786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.181 [2024-07-15 23:28:37.332811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.181 qpair failed and we were unable to recover it. 00:25:22.181 [2024-07-15 23:28:37.332951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.181 [2024-07-15 23:28:37.332976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.181 qpair failed and we were unable to recover it. 00:25:22.181 [2024-07-15 23:28:37.333117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.181 [2024-07-15 23:28:37.333142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.181 qpair failed and we were unable to recover it. 00:25:22.181 [2024-07-15 23:28:37.333263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.181 [2024-07-15 23:28:37.333296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.181 qpair failed and we were unable to recover it. 00:25:22.181 [2024-07-15 23:28:37.333446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.181 [2024-07-15 23:28:37.333471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.181 qpair failed and we were unable to recover it. 00:25:22.181 [2024-07-15 23:28:37.333621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.182 [2024-07-15 23:28:37.333646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.182 qpair failed and we were unable to recover it. 00:25:22.182 [2024-07-15 23:28:37.333749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.182 [2024-07-15 23:28:37.333774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.182 qpair failed and we were unable to recover it. 00:25:22.182 [2024-07-15 23:28:37.333939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.182 [2024-07-15 23:28:37.333965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.182 qpair failed and we were unable to recover it. 
00:25:22.182 [2024-07-15 23:28:37.334087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.182 [2024-07-15 23:28:37.334111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.182 qpair failed and we were unable to recover it. 00:25:22.182 [2024-07-15 23:28:37.334341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.182 [2024-07-15 23:28:37.334365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.182 qpair failed and we were unable to recover it. 00:25:22.182 [2024-07-15 23:28:37.334498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.182 [2024-07-15 23:28:37.334523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.182 qpair failed and we were unable to recover it. 00:25:22.182 [2024-07-15 23:28:37.334637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.182 [2024-07-15 23:28:37.334662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.182 qpair failed and we were unable to recover it. 00:25:22.182 [2024-07-15 23:28:37.334794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.182 [2024-07-15 23:28:37.334819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.182 qpair failed and we were unable to recover it. 00:25:22.182 [2024-07-15 23:28:37.334927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.182 [2024-07-15 23:28:37.334952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.182 qpair failed and we were unable to recover it. 00:25:22.182 [2024-07-15 23:28:37.335057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.182 [2024-07-15 23:28:37.335090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.182 qpair failed and we were unable to recover it. 00:25:22.182 [2024-07-15 23:28:37.335285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.182 [2024-07-15 23:28:37.335310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.182 qpair failed and we were unable to recover it. 00:25:22.182 [2024-07-15 23:28:37.335443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.182 [2024-07-15 23:28:37.335468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.182 qpair failed and we were unable to recover it. 00:25:22.182 [2024-07-15 23:28:37.335604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.182 [2024-07-15 23:28:37.335629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.182 qpair failed and we were unable to recover it. 
00:25:22.182 [2024-07-15 23:28:37.335807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.182 [2024-07-15 23:28:37.335837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.182 qpair failed and we were unable to recover it. 00:25:22.182 [2024-07-15 23:28:37.336001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.182 [2024-07-15 23:28:37.336025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.182 qpair failed and we were unable to recover it. 00:25:22.182 [2024-07-15 23:28:37.336197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.182 [2024-07-15 23:28:37.336222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.182 qpair failed and we were unable to recover it. 00:25:22.182 [2024-07-15 23:28:37.336355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.182 [2024-07-15 23:28:37.336381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.182 qpair failed and we were unable to recover it. 00:25:22.182 [2024-07-15 23:28:37.336551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.182 [2024-07-15 23:28:37.336576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.182 qpair failed and we were unable to recover it. 00:25:22.182 [2024-07-15 23:28:37.336694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.182 [2024-07-15 23:28:37.336719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.182 qpair failed and we were unable to recover it. 00:25:22.182 [2024-07-15 23:28:37.336934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.182 [2024-07-15 23:28:37.336960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.182 qpair failed and we were unable to recover it. 00:25:22.182 [2024-07-15 23:28:37.337131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.182 [2024-07-15 23:28:37.337156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.182 qpair failed and we were unable to recover it. 00:25:22.182 [2024-07-15 23:28:37.337295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.182 [2024-07-15 23:28:37.337320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.182 qpair failed and we were unable to recover it. 00:25:22.182 [2024-07-15 23:28:37.337456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.182 [2024-07-15 23:28:37.337481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.182 qpair failed and we were unable to recover it. 
00:25:22.182 [2024-07-15 23:28:37.337621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.182 [2024-07-15 23:28:37.337646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.182 qpair failed and we were unable to recover it. 00:25:22.182 [2024-07-15 23:28:37.337810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.182 [2024-07-15 23:28:37.337835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.182 qpair failed and we were unable to recover it. 00:25:22.182 [2024-07-15 23:28:37.337942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.182 [2024-07-15 23:28:37.337966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.182 qpair failed and we were unable to recover it. 00:25:22.182 [2024-07-15 23:28:37.338096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.182 [2024-07-15 23:28:37.338121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.182 qpair failed and we were unable to recover it. 00:25:22.182 [2024-07-15 23:28:37.338260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.182 [2024-07-15 23:28:37.338285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.182 qpair failed and we were unable to recover it. 00:25:22.182 [2024-07-15 23:28:37.338408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.182 [2024-07-15 23:28:37.338432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.182 qpair failed and we were unable to recover it. 00:25:22.182 [2024-07-15 23:28:37.338667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.182 [2024-07-15 23:28:37.338691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.182 qpair failed and we were unable to recover it. 00:25:22.182 [2024-07-15 23:28:37.338842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.182 [2024-07-15 23:28:37.338868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.182 qpair failed and we were unable to recover it. 00:25:22.182 [2024-07-15 23:28:37.338997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.182 [2024-07-15 23:28:37.339022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.182 qpair failed and we were unable to recover it. 00:25:22.182 [2024-07-15 23:28:37.339182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.182 [2024-07-15 23:28:37.339207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.182 qpair failed and we were unable to recover it. 
00:25:22.182 23:28:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:22.182 [2024-07-15 23:28:37.339373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.182 23:28:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:25:22.182 [2024-07-15 23:28:37.339399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.182 qpair failed and we were unable to recover it. 00:25:22.182 [2024-07-15 23:28:37.339536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.182 23:28:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:22.182 [2024-07-15 23:28:37.339563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.182 qpair failed and we were unable to recover it. 00:25:22.182 23:28:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:22.182 [2024-07-15 23:28:37.339701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.182 [2024-07-15 23:28:37.339726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.182 qpair failed and we were unable to recover it. 00:25:22.182 23:28:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:22.182 [2024-07-15 23:28:37.339891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.182 [2024-07-15 23:28:37.339917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.182 qpair failed and we were unable to recover it. 00:25:22.182 [2024-07-15 23:28:37.340053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.182 [2024-07-15 23:28:37.340077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.182 qpair failed and we were unable to recover it. 00:25:22.182 [2024-07-15 23:28:37.340244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.182 [2024-07-15 23:28:37.340269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.182 qpair failed and we were unable to recover it. 00:25:22.182 [2024-07-15 23:28:37.340411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.182 [2024-07-15 23:28:37.340437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.182 qpair failed and we were unable to recover it. 00:25:22.182 [2024-07-15 23:28:37.340549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.182 [2024-07-15 23:28:37.340573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.182 qpair failed and we were unable to recover it. 
00:25:22.182 [2024-07-15 23:28:37.340697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.182 [2024-07-15 23:28:37.340725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.183 qpair failed and we were unable to recover it. 00:25:22.183 [2024-07-15 23:28:37.340873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.183 [2024-07-15 23:28:37.340899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.183 qpair failed and we were unable to recover it. 00:25:22.183 [2024-07-15 23:28:37.341060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.183 [2024-07-15 23:28:37.341085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.183 qpair failed and we were unable to recover it. 00:25:22.183 [2024-07-15 23:28:37.341251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.183 [2024-07-15 23:28:37.341276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.183 qpair failed and we were unable to recover it. 00:25:22.183 [2024-07-15 23:28:37.341413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.183 [2024-07-15 23:28:37.341437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.183 qpair failed and we were unable to recover it. 00:25:22.183 [2024-07-15 23:28:37.341577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.183 [2024-07-15 23:28:37.341602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.183 qpair failed and we were unable to recover it. 00:25:22.183 [2024-07-15 23:28:37.341804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.183 [2024-07-15 23:28:37.341830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.183 qpair failed and we were unable to recover it. 00:25:22.183 [2024-07-15 23:28:37.341987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.183 [2024-07-15 23:28:37.342012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.183 qpair failed and we were unable to recover it. 00:25:22.183 [2024-07-15 23:28:37.342129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.183 [2024-07-15 23:28:37.342153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.183 qpair failed and we were unable to recover it. 00:25:22.183 [2024-07-15 23:28:37.342317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.183 [2024-07-15 23:28:37.342342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.183 qpair failed and we were unable to recover it. 
00:25:22.183 [2024-07-15 23:28:37.342504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.183 [2024-07-15 23:28:37.342529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.183 qpair failed and we were unable to recover it. 00:25:22.183 [2024-07-15 23:28:37.342700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.183 [2024-07-15 23:28:37.342725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.183 qpair failed and we were unable to recover it. 00:25:22.183 [2024-07-15 23:28:37.342846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.183 [2024-07-15 23:28:37.342870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.183 qpair failed and we were unable to recover it. 00:25:22.183 [2024-07-15 23:28:37.343010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.183 [2024-07-15 23:28:37.343035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.183 qpair failed and we were unable to recover it. 00:25:22.183 [2024-07-15 23:28:37.343138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.183 [2024-07-15 23:28:37.343163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.183 qpair failed and we were unable to recover it. 00:25:22.183 [2024-07-15 23:28:37.343297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.183 [2024-07-15 23:28:37.343322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.183 qpair failed and we were unable to recover it. 00:25:22.183 [2024-07-15 23:28:37.343457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.183 [2024-07-15 23:28:37.343482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.183 qpair failed and we were unable to recover it. 00:25:22.183 [2024-07-15 23:28:37.343615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.183 [2024-07-15 23:28:37.343640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.183 qpair failed and we were unable to recover it. 00:25:22.183 [2024-07-15 23:28:37.343785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.183 [2024-07-15 23:28:37.343811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.183 qpair failed and we were unable to recover it. 00:25:22.183 [2024-07-15 23:28:37.343947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.183 [2024-07-15 23:28:37.343971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.183 qpair failed and we were unable to recover it. 
00:25:22.183 [2024-07-15 23:28:37.344102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.183 [2024-07-15 23:28:37.344127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.183 qpair failed and we were unable to recover it. 00:25:22.183 [2024-07-15 23:28:37.344267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.183 [2024-07-15 23:28:37.344293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.183 qpair failed and we were unable to recover it. 00:25:22.183 [2024-07-15 23:28:37.344428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.183 [2024-07-15 23:28:37.344453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.183 qpair failed and we were unable to recover it. 00:25:22.183 [2024-07-15 23:28:37.344593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.183 [2024-07-15 23:28:37.344618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.183 qpair failed and we were unable to recover it. 00:25:22.183 [2024-07-15 23:28:37.344772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.183 [2024-07-15 23:28:37.344797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.183 qpair failed and we were unable to recover it. 00:25:22.183 [2024-07-15 23:28:37.344943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.183 [2024-07-15 23:28:37.344968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.183 qpair failed and we were unable to recover it. 00:25:22.183 [2024-07-15 23:28:37.345109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.183 [2024-07-15 23:28:37.345134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.183 qpair failed and we were unable to recover it. 00:25:22.183 [2024-07-15 23:28:37.345271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.183 [2024-07-15 23:28:37.345296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.183 qpair failed and we were unable to recover it. 00:25:22.183 [2024-07-15 23:28:37.345408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.183 [2024-07-15 23:28:37.345433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.183 qpair failed and we were unable to recover it. 00:25:22.183 [2024-07-15 23:28:37.345547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.183 [2024-07-15 23:28:37.345572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.183 qpair failed and we were unable to recover it. 
00:25:22.183 [2024-07-15 23:28:37.345707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.183 [2024-07-15 23:28:37.345733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.183 qpair failed and we were unable to recover it. 00:25:22.183 [2024-07-15 23:28:37.345848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.183 [2024-07-15 23:28:37.345873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.183 qpair failed and we were unable to recover it. 00:25:22.183 [2024-07-15 23:28:37.346013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.183 [2024-07-15 23:28:37.346037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.183 qpair failed and we were unable to recover it. 00:25:22.183 [2024-07-15 23:28:37.346144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.183 [2024-07-15 23:28:37.346170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.183 qpair failed and we were unable to recover it. 00:25:22.183 [2024-07-15 23:28:37.346282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.183 [2024-07-15 23:28:37.346307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.183 qpair failed and we were unable to recover it. 00:25:22.183 [2024-07-15 23:28:37.346444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.183 [2024-07-15 23:28:37.346469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.183 qpair failed and we were unable to recover it. 00:25:22.183 [2024-07-15 23:28:37.346605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.183 [2024-07-15 23:28:37.346630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.183 qpair failed and we were unable to recover it. 00:25:22.183 [2024-07-15 23:28:37.346771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.183 [2024-07-15 23:28:37.346798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.183 qpair failed and we were unable to recover it. 00:25:22.183 [2024-07-15 23:28:37.346930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.183 [2024-07-15 23:28:37.346959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.183 qpair failed and we were unable to recover it. 00:25:22.183 [2024-07-15 23:28:37.347082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.183 [2024-07-15 23:28:37.347116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.183 qpair failed and we were unable to recover it. 
00:25:22.183 [2024-07-15 23:28:37.347346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.183 [2024-07-15 23:28:37.347371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.183 qpair failed and we were unable to recover it. 00:25:22.183 [2024-07-15 23:28:37.347494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.183 [2024-07-15 23:28:37.347520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.183 qpair failed and we were unable to recover it. 00:25:22.183 [2024-07-15 23:28:37.347686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.183 [2024-07-15 23:28:37.347711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.183 qpair failed and we were unable to recover it. 00:25:22.183 [2024-07-15 23:28:37.347831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.183 [2024-07-15 23:28:37.347856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.183 qpair failed and we were unable to recover it. 00:25:22.183 [2024-07-15 23:28:37.347957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.183 [2024-07-15 23:28:37.347982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.183 qpair failed and we were unable to recover it. 00:25:22.183 [2024-07-15 23:28:37.348146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.183 [2024-07-15 23:28:37.348171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.183 qpair failed and we were unable to recover it. 00:25:22.183 [2024-07-15 23:28:37.348273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.183 [2024-07-15 23:28:37.348298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.183 qpair failed and we were unable to recover it. 00:25:22.184 [2024-07-15 23:28:37.348436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.184 [2024-07-15 23:28:37.348461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.184 qpair failed and we were unable to recover it. 00:25:22.184 [2024-07-15 23:28:37.348602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.184 [2024-07-15 23:28:37.348627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.184 qpair failed and we were unable to recover it. 00:25:22.184 [2024-07-15 23:28:37.348771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.184 [2024-07-15 23:28:37.348797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.184 qpair failed and we were unable to recover it. 
00:25:22.184 [2024-07-15 23:28:37.348914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.184 [2024-07-15 23:28:37.348939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.184 qpair failed and we were unable to recover it. 00:25:22.184 [2024-07-15 23:28:37.349049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.184 [2024-07-15 23:28:37.349074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.184 qpair failed and we were unable to recover it. 00:25:22.184 [2024-07-15 23:28:37.349228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.184 [2024-07-15 23:28:37.349253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.184 qpair failed and we were unable to recover it. 00:25:22.184 [2024-07-15 23:28:37.349365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.184 [2024-07-15 23:28:37.349391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.184 qpair failed and we were unable to recover it. 00:25:22.184 [2024-07-15 23:28:37.349549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.184 [2024-07-15 23:28:37.349574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.184 qpair failed and we were unable to recover it. 00:25:22.184 [2024-07-15 23:28:37.349743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.184 [2024-07-15 23:28:37.349768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.184 qpair failed and we were unable to recover it. 00:25:22.184 [2024-07-15 23:28:37.349872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.184 [2024-07-15 23:28:37.349897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.184 qpair failed and we were unable to recover it. 00:25:22.184 [2024-07-15 23:28:37.350002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.184 [2024-07-15 23:28:37.350027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.184 qpair failed and we were unable to recover it. 00:25:22.184 [2024-07-15 23:28:37.350197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.184 [2024-07-15 23:28:37.350222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.184 qpair failed and we were unable to recover it. 00:25:22.184 [2024-07-15 23:28:37.350331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.184 [2024-07-15 23:28:37.350355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.184 qpair failed and we were unable to recover it. 
00:25:22.184 [2024-07-15 23:28:37.350488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.184 [2024-07-15 23:28:37.350513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.184 qpair failed and we were unable to recover it. 00:25:22.184 [2024-07-15 23:28:37.350649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.184 [2024-07-15 23:28:37.350674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.184 qpair failed and we were unable to recover it. 00:25:22.184 [2024-07-15 23:28:37.350790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.184 [2024-07-15 23:28:37.350815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.184 qpair failed and we were unable to recover it. 00:25:22.184 [2024-07-15 23:28:37.350918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.184 [2024-07-15 23:28:37.350942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.184 qpair failed and we were unable to recover it. 00:25:22.184 [2024-07-15 23:28:37.351056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.184 [2024-07-15 23:28:37.351079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.184 qpair failed and we were unable to recover it. 00:25:22.184 [2024-07-15 23:28:37.351194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.184 [2024-07-15 23:28:37.351224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.184 qpair failed and we were unable to recover it. 00:25:22.184 [2024-07-15 23:28:37.351356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.184 [2024-07-15 23:28:37.351381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.184 qpair failed and we were unable to recover it. 00:25:22.184 [2024-07-15 23:28:37.351486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.184 [2024-07-15 23:28:37.351510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.184 qpair failed and we were unable to recover it. 00:25:22.184 [2024-07-15 23:28:37.351660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.184 [2024-07-15 23:28:37.351686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.184 qpair failed and we were unable to recover it. 00:25:22.184 [2024-07-15 23:28:37.351807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.184 [2024-07-15 23:28:37.351831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.184 qpair failed and we were unable to recover it. 
00:25:22.184 [2024-07-15 23:28:37.351943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.184 [2024-07-15 23:28:37.351968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.184 qpair failed and we were unable to recover it. 00:25:22.184 [2024-07-15 23:28:37.352108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.184 [2024-07-15 23:28:37.352132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.184 qpair failed and we were unable to recover it. 00:25:22.184 [2024-07-15 23:28:37.352254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.184 [2024-07-15 23:28:37.352278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.184 qpair failed and we were unable to recover it. 00:25:22.184 [2024-07-15 23:28:37.352420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.184 [2024-07-15 23:28:37.352444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.184 qpair failed and we were unable to recover it. 00:25:22.184 [2024-07-15 23:28:37.352565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.184 [2024-07-15 23:28:37.352588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.184 qpair failed and we were unable to recover it. 00:25:22.184 [2024-07-15 23:28:37.352753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.184 [2024-07-15 23:28:37.352779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.184 qpair failed and we were unable to recover it. 00:25:22.184 [2024-07-15 23:28:37.352915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.184 [2024-07-15 23:28:37.352939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.184 qpair failed and we were unable to recover it. 00:25:22.184 [2024-07-15 23:28:37.353059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.184 [2024-07-15 23:28:37.353083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.184 qpair failed and we were unable to recover it. 00:25:22.184 [2024-07-15 23:28:37.353230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.184 [2024-07-15 23:28:37.353256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.184 qpair failed and we were unable to recover it. 00:25:22.184 [2024-07-15 23:28:37.353375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.184 [2024-07-15 23:28:37.353399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.184 qpair failed and we were unable to recover it. 
00:25:22.184 [2024-07-15 23:28:37.353507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.184 [2024-07-15 23:28:37.353531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.184 qpair failed and we were unable to recover it. 00:25:22.184 [2024-07-15 23:28:37.353669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.184 [2024-07-15 23:28:37.353695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.184 qpair failed and we were unable to recover it. 00:25:22.184 [2024-07-15 23:28:37.353821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.184 [2024-07-15 23:28:37.353846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.184 qpair failed and we were unable to recover it. 00:25:22.184 [2024-07-15 23:28:37.353971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.184 [2024-07-15 23:28:37.353996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.184 qpair failed and we were unable to recover it. 00:25:22.184 [2024-07-15 23:28:37.354133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.184 [2024-07-15 23:28:37.354156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.184 qpair failed and we were unable to recover it. 00:25:22.184 [2024-07-15 23:28:37.354299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.184 [2024-07-15 23:28:37.354324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.184 qpair failed and we were unable to recover it. 00:25:22.184 [2024-07-15 23:28:37.354476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.184 [2024-07-15 23:28:37.354501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.184 qpair failed and we were unable to recover it. 00:25:22.184 [2024-07-15 23:28:37.354639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.184 [2024-07-15 23:28:37.354663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.184 qpair failed and we were unable to recover it. 00:25:22.184 [2024-07-15 23:28:37.354783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.184 [2024-07-15 23:28:37.354808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.184 qpair failed and we were unable to recover it. 00:25:22.184 [2024-07-15 23:28:37.354924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.184 [2024-07-15 23:28:37.354948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.184 qpair failed and we were unable to recover it. 
00:25:22.184 [2024-07-15 23:28:37.355078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.184 [2024-07-15 23:28:37.355103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.184 qpair failed and we were unable to recover it. 00:25:22.184 [2024-07-15 23:28:37.355239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.184 [2024-07-15 23:28:37.355264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.184 qpair failed and we were unable to recover it. 00:25:22.184 [2024-07-15 23:28:37.355396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.184 [2024-07-15 23:28:37.355425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.184 qpair failed and we were unable to recover it. 00:25:22.184 [2024-07-15 23:28:37.355562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.184 [2024-07-15 23:28:37.355586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.184 qpair failed and we were unable to recover it. 00:25:22.184 [2024-07-15 23:28:37.355698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.184 [2024-07-15 23:28:37.355722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.185 qpair failed and we were unable to recover it. 00:25:22.185 [2024-07-15 23:28:37.355870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.185 [2024-07-15 23:28:37.355895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.185 qpair failed and we were unable to recover it. 00:25:22.185 [2024-07-15 23:28:37.355996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.185 [2024-07-15 23:28:37.356022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.185 qpair failed and we were unable to recover it. 00:25:22.185 [2024-07-15 23:28:37.356134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.185 [2024-07-15 23:28:37.356158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.185 qpair failed and we were unable to recover it. 00:25:22.185 [2024-07-15 23:28:37.356321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.185 [2024-07-15 23:28:37.356347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.185 qpair failed and we were unable to recover it. 00:25:22.185 [2024-07-15 23:28:37.356448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.185 [2024-07-15 23:28:37.356471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.185 qpair failed and we were unable to recover it. 
00:25:22.185 [2024-07-15 23:28:37.356607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.185 [2024-07-15 23:28:37.356631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.185 qpair failed and we were unable to recover it. 00:25:22.185 [2024-07-15 23:28:37.356766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.185 [2024-07-15 23:28:37.356791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.185 qpair failed and we were unable to recover it. 00:25:22.185 23:28:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:22.185 [2024-07-15 23:28:37.356900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.185 [2024-07-15 23:28:37.356925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.185 qpair failed and we were unable to recover it. 00:25:22.185 23:28:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:22.185 [2024-07-15 23:28:37.357082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.185 [2024-07-15 23:28:37.357108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.185 qpair failed and we were unable to recover it. 00:25:22.185 23:28:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.185 [2024-07-15 23:28:37.357219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.185 [2024-07-15 23:28:37.357244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.185 qpair failed and we were unable to recover it. 00:25:22.185 23:28:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:22.185 [2024-07-15 23:28:37.357420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.185 [2024-07-15 23:28:37.357447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.185 qpair failed and we were unable to recover it. 00:25:22.185 [2024-07-15 23:28:37.357492] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b93c80 (9): Bad file descriptor 00:25:22.185 [2024-07-15 23:28:37.357674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.185 [2024-07-15 23:28:37.357713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.185 qpair failed and we were unable to recover it. 00:25:22.185 [2024-07-15 23:28:37.357893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.185 [2024-07-15 23:28:37.357920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.185 qpair failed and we were unable to recover it. 
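At this point the trace shows the harness arming its cleanup trap (process_shm plus nvmftestfini on SIGINT/SIGTERM/EXIT) and target_disconnect.sh creating the test's backing device: a 64 MB malloc bdev with a 512-byte block size, named Malloc0. The flush failure on tqpair=0x1b93c80 (errno 9, Bad file descriptor) and the retries that continue against a new tqpair (0x7f7a20000b90) appear to be the initiator dropping the dead socket and re-dialing. Outside the harness, the same bdev could be created directly with SPDK's rpc.py; the paths below are assumptions, not taken from the test:

    # Illustrative equivalent of the traced "rpc_cmd bdev_malloc_create 64 512 -b Malloc0";
    # assumes an SPDK checkout in the current directory and the default RPC socket.
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # total_size=64 MB, block_size=512
    ./scripts/rpc.py bdev_get_bdevs -b Malloc0              # confirm Malloc0 was created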
00:25:22.185 [2024-07-15 23:28:37.358038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.185 [2024-07-15 23:28:37.358065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.185 qpair failed and we were unable to recover it. 00:25:22.185 [2024-07-15 23:28:37.358204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.185 [2024-07-15 23:28:37.358229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.185 qpair failed and we were unable to recover it. 00:25:22.185 [2024-07-15 23:28:37.358397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.185 [2024-07-15 23:28:37.358422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.185 qpair failed and we were unable to recover it. 00:25:22.185 [2024-07-15 23:28:37.358566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.185 [2024-07-15 23:28:37.358592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.185 qpair failed and we were unable to recover it. 00:25:22.185 [2024-07-15 23:28:37.358756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.185 [2024-07-15 23:28:37.358782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.185 qpair failed and we were unable to recover it. 00:25:22.185 [2024-07-15 23:28:37.358897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.185 [2024-07-15 23:28:37.358922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.185 qpair failed and we were unable to recover it. 00:25:22.185 [2024-07-15 23:28:37.359040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.185 [2024-07-15 23:28:37.359064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.185 qpair failed and we were unable to recover it. 00:25:22.185 [2024-07-15 23:28:37.359223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.185 [2024-07-15 23:28:37.359248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.185 qpair failed and we were unable to recover it. 00:25:22.185 [2024-07-15 23:28:37.359385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.185 [2024-07-15 23:28:37.359409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.185 qpair failed and we were unable to recover it. 00:25:22.185 [2024-07-15 23:28:37.359550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.185 [2024-07-15 23:28:37.359575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.185 qpair failed and we were unable to recover it. 
00:25:22.185 [2024-07-15 23:28:37.359722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.185 [2024-07-15 23:28:37.359769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.185 qpair failed and we were unable to recover it. 00:25:22.185 [2024-07-15 23:28:37.359891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.185 [2024-07-15 23:28:37.359917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.185 qpair failed and we were unable to recover it. 00:25:22.185 [2024-07-15 23:28:37.360032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.185 [2024-07-15 23:28:37.360058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.185 qpair failed and we were unable to recover it. 00:25:22.185 [2024-07-15 23:28:37.360198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.185 [2024-07-15 23:28:37.360224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.185 qpair failed and we were unable to recover it. 00:25:22.185 [2024-07-15 23:28:37.360359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.185 [2024-07-15 23:28:37.360384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.185 qpair failed and we were unable to recover it. 00:25:22.185 [2024-07-15 23:28:37.360502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.185 [2024-07-15 23:28:37.360527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.185 qpair failed and we were unable to recover it. 00:25:22.185 [2024-07-15 23:28:37.360634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.185 [2024-07-15 23:28:37.360659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.185 qpair failed and we were unable to recover it. 00:25:22.185 [2024-07-15 23:28:37.360808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.185 [2024-07-15 23:28:37.360833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.185 qpair failed and we were unable to recover it. 00:25:22.185 [2024-07-15 23:28:37.360941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.185 [2024-07-15 23:28:37.360966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.185 qpair failed and we were unable to recover it. 00:25:22.185 [2024-07-15 23:28:37.361101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.185 [2024-07-15 23:28:37.361126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.185 qpair failed and we were unable to recover it. 
00:25:22.185 [2024-07-15 23:28:37.361263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.185 [2024-07-15 23:28:37.361288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.185 qpair failed and we were unable to recover it. 00:25:22.185 [2024-07-15 23:28:37.361453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.185 [2024-07-15 23:28:37.361479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.185 qpair failed and we were unable to recover it. 00:25:22.185 [2024-07-15 23:28:37.361588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.185 [2024-07-15 23:28:37.361618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.185 qpair failed and we were unable to recover it. 00:25:22.185 [2024-07-15 23:28:37.361754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.185 [2024-07-15 23:28:37.361779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.186 qpair failed and we were unable to recover it. 00:25:22.186 [2024-07-15 23:28:37.361888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.186 [2024-07-15 23:28:37.361913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.186 qpair failed and we were unable to recover it. 00:25:22.186 [2024-07-15 23:28:37.362058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.186 [2024-07-15 23:28:37.362083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.186 qpair failed and we were unable to recover it. 00:25:22.186 [2024-07-15 23:28:37.362225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.186 [2024-07-15 23:28:37.362250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.186 qpair failed and we were unable to recover it. 00:25:22.186 [2024-07-15 23:28:37.362419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.186 [2024-07-15 23:28:37.362444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.186 qpair failed and we were unable to recover it. 00:25:22.186 [2024-07-15 23:28:37.362606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.186 [2024-07-15 23:28:37.362631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.186 qpair failed and we were unable to recover it. 00:25:22.186 [2024-07-15 23:28:37.362770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.186 [2024-07-15 23:28:37.362796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.186 qpair failed and we were unable to recover it. 
00:25:22.186 [2024-07-15 23:28:37.362910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.186 [2024-07-15 23:28:37.362935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.186 qpair failed and we were unable to recover it. 00:25:22.186 [2024-07-15 23:28:37.363081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.186 [2024-07-15 23:28:37.363106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.186 qpair failed and we were unable to recover it. 00:25:22.186 [2024-07-15 23:28:37.363265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.186 [2024-07-15 23:28:37.363290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.186 qpair failed and we were unable to recover it. 00:25:22.186 [2024-07-15 23:28:37.363457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.186 [2024-07-15 23:28:37.363482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.186 qpair failed and we were unable to recover it. 00:25:22.186 [2024-07-15 23:28:37.363612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.186 [2024-07-15 23:28:37.363637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.186 qpair failed and we were unable to recover it. 00:25:22.186 [2024-07-15 23:28:37.363776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.186 [2024-07-15 23:28:37.363801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.186 qpair failed and we were unable to recover it. 00:25:22.186 [2024-07-15 23:28:37.363918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.186 [2024-07-15 23:28:37.363943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.186 qpair failed and we were unable to recover it. 00:25:22.186 [2024-07-15 23:28:37.364104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.186 [2024-07-15 23:28:37.364129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.186 qpair failed and we were unable to recover it. 00:25:22.186 [2024-07-15 23:28:37.364265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.186 [2024-07-15 23:28:37.364290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.186 qpair failed and we were unable to recover it. 00:25:22.186 [2024-07-15 23:28:37.364453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.186 [2024-07-15 23:28:37.364478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.186 qpair failed and we were unable to recover it. 
00:25:22.186 [2024-07-15 23:28:37.364646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.186 [2024-07-15 23:28:37.364671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.186 qpair failed and we were unable to recover it. 00:25:22.186 [2024-07-15 23:28:37.364782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.186 [2024-07-15 23:28:37.364807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.186 qpair failed and we were unable to recover it. 00:25:22.186 [2024-07-15 23:28:37.364915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.186 [2024-07-15 23:28:37.364940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.186 qpair failed and we were unable to recover it. 00:25:22.186 [2024-07-15 23:28:37.365081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.186 [2024-07-15 23:28:37.365106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.186 qpair failed and we were unable to recover it. 00:25:22.186 [2024-07-15 23:28:37.365241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.186 [2024-07-15 23:28:37.365266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.186 qpair failed and we were unable to recover it. 00:25:22.186 [2024-07-15 23:28:37.365403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.186 [2024-07-15 23:28:37.365429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.186 qpair failed and we were unable to recover it. 00:25:22.186 [2024-07-15 23:28:37.365591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.186 [2024-07-15 23:28:37.365616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.186 qpair failed and we were unable to recover it. 00:25:22.186 [2024-07-15 23:28:37.365781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.186 [2024-07-15 23:28:37.365806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.186 qpair failed and we were unable to recover it. 00:25:22.186 [2024-07-15 23:28:37.365941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.186 [2024-07-15 23:28:37.365966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.186 qpair failed and we were unable to recover it. 00:25:22.186 [2024-07-15 23:28:37.366087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.186 [2024-07-15 23:28:37.366113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.186 qpair failed and we were unable to recover it. 
00:25:22.186 [2024-07-15 23:28:37.366243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.186 [2024-07-15 23:28:37.366268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.186 qpair failed and we were unable to recover it. 00:25:22.186 [2024-07-15 23:28:37.366430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.186 [2024-07-15 23:28:37.366456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.186 qpair failed and we were unable to recover it. 00:25:22.186 [2024-07-15 23:28:37.366594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.186 [2024-07-15 23:28:37.366619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.186 qpair failed and we were unable to recover it. 00:25:22.186 [2024-07-15 23:28:37.366820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.186 [2024-07-15 23:28:37.366847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.186 qpair failed and we were unable to recover it. 00:25:22.186 [2024-07-15 23:28:37.366994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.186 [2024-07-15 23:28:37.367019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.186 qpair failed and we were unable to recover it. 00:25:22.186 [2024-07-15 23:28:37.367183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.186 [2024-07-15 23:28:37.367207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.186 qpair failed and we were unable to recover it. 00:25:22.186 [2024-07-15 23:28:37.367348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.186 [2024-07-15 23:28:37.367373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.186 qpair failed and we were unable to recover it. 00:25:22.186 [2024-07-15 23:28:37.367504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.186 [2024-07-15 23:28:37.367530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.186 qpair failed and we were unable to recover it. 00:25:22.186 [2024-07-15 23:28:37.367696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.186 [2024-07-15 23:28:37.367720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.186 qpair failed and we were unable to recover it. 00:25:22.186 [2024-07-15 23:28:37.367831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.186 [2024-07-15 23:28:37.367857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.186 qpair failed and we were unable to recover it. 
00:25:22.186 [2024-07-15 23:28:37.368002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.186 [2024-07-15 23:28:37.368028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.186 qpair failed and we were unable to recover it. 00:25:22.186 [2024-07-15 23:28:37.368188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.186 [2024-07-15 23:28:37.368221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.186 qpair failed and we were unable to recover it. 00:25:22.186 [2024-07-15 23:28:37.368366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.186 [2024-07-15 23:28:37.368392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.186 qpair failed and we were unable to recover it. 00:25:22.186 [2024-07-15 23:28:37.368537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.186 [2024-07-15 23:28:37.368563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.186 qpair failed and we were unable to recover it. 00:25:22.186 [2024-07-15 23:28:37.368724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.186 [2024-07-15 23:28:37.368755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.186 qpair failed and we were unable to recover it. 00:25:22.186 [2024-07-15 23:28:37.368899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.186 [2024-07-15 23:28:37.368924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.186 qpair failed and we were unable to recover it. 00:25:22.186 [2024-07-15 23:28:37.369101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.186 [2024-07-15 23:28:37.369126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.186 qpair failed and we were unable to recover it. 00:25:22.186 [2024-07-15 23:28:37.369266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.186 [2024-07-15 23:28:37.369291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.186 qpair failed and we were unable to recover it. 00:25:22.186 [2024-07-15 23:28:37.369429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.186 [2024-07-15 23:28:37.369455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.186 qpair failed and we were unable to recover it. 00:25:22.186 [2024-07-15 23:28:37.369622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.186 [2024-07-15 23:28:37.369647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.186 qpair failed and we were unable to recover it. 
00:25:22.186 [2024-07-15 23:28:37.369795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.186 [2024-07-15 23:28:37.369821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.186 qpair failed and we were unable to recover it. 00:25:22.187 [2024-07-15 23:28:37.369976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.187 [2024-07-15 23:28:37.370001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.187 qpair failed and we were unable to recover it. 00:25:22.187 [2024-07-15 23:28:37.370115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.187 [2024-07-15 23:28:37.370140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.187 qpair failed and we were unable to recover it. 00:25:22.187 [2024-07-15 23:28:37.370305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.187 [2024-07-15 23:28:37.370330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.187 qpair failed and we were unable to recover it. 00:25:22.187 [2024-07-15 23:28:37.370467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.187 [2024-07-15 23:28:37.370491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.187 qpair failed and we were unable to recover it. 00:25:22.187 [2024-07-15 23:28:37.370629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.187 [2024-07-15 23:28:37.370655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.187 qpair failed and we were unable to recover it. 00:25:22.187 [2024-07-15 23:28:37.370873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.187 [2024-07-15 23:28:37.370899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.187 qpair failed and we were unable to recover it. 00:25:22.187 [2024-07-15 23:28:37.371010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.187 [2024-07-15 23:28:37.371035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.187 qpair failed and we were unable to recover it. 00:25:22.187 [2024-07-15 23:28:37.371200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.187 [2024-07-15 23:28:37.371226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.187 qpair failed and we were unable to recover it. 00:25:22.187 [2024-07-15 23:28:37.371355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.187 [2024-07-15 23:28:37.371380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.187 qpair failed and we were unable to recover it. 
00:25:22.187 [2024-07-15 23:28:37.371516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.187 [2024-07-15 23:28:37.371541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.187 qpair failed and we were unable to recover it. 00:25:22.187 [2024-07-15 23:28:37.371681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.187 [2024-07-15 23:28:37.371707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.187 qpair failed and we were unable to recover it. 00:25:22.187 [2024-07-15 23:28:37.371829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.187 [2024-07-15 23:28:37.371855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.187 qpair failed and we were unable to recover it. 00:25:22.187 [2024-07-15 23:28:37.371992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.187 [2024-07-15 23:28:37.372018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.187 qpair failed and we were unable to recover it. 00:25:22.187 [2024-07-15 23:28:37.372243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.187 [2024-07-15 23:28:37.372268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.187 qpair failed and we were unable to recover it. 00:25:22.187 [2024-07-15 23:28:37.372378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.187 [2024-07-15 23:28:37.372403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.187 qpair failed and we were unable to recover it. 00:25:22.187 [2024-07-15 23:28:37.372542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.187 [2024-07-15 23:28:37.372567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.187 qpair failed and we were unable to recover it. 00:25:22.187 [2024-07-15 23:28:37.372704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.187 [2024-07-15 23:28:37.372742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.187 qpair failed and we were unable to recover it. 00:25:22.187 [2024-07-15 23:28:37.372848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.187 [2024-07-15 23:28:37.372873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.187 qpair failed and we were unable to recover it. 00:25:22.187 [2024-07-15 23:28:37.372979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.187 [2024-07-15 23:28:37.373008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.187 qpair failed and we were unable to recover it. 
00:25:22.187 [2024-07-15 23:28:37.373159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.187 [2024-07-15 23:28:37.373184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.187 qpair failed and we were unable to recover it. 00:25:22.187 [2024-07-15 23:28:37.373333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.187 [2024-07-15 23:28:37.373358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.187 qpair failed and we were unable to recover it. 00:25:22.187 [2024-07-15 23:28:37.373480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.187 [2024-07-15 23:28:37.373505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.187 qpair failed and we were unable to recover it. 00:25:22.187 [2024-07-15 23:28:37.373646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.187 [2024-07-15 23:28:37.373671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.187 qpair failed and we were unable to recover it. 00:25:22.187 [2024-07-15 23:28:37.373803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.187 [2024-07-15 23:28:37.373828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.187 qpair failed and we were unable to recover it. 00:25:22.187 [2024-07-15 23:28:37.373940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.187 [2024-07-15 23:28:37.373965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.187 qpair failed and we were unable to recover it. 00:25:22.187 [2024-07-15 23:28:37.374130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.187 [2024-07-15 23:28:37.374155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.187 qpair failed and we were unable to recover it. 00:25:22.187 [2024-07-15 23:28:37.374373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.187 [2024-07-15 23:28:37.374398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.187 qpair failed and we were unable to recover it. 00:25:22.187 [2024-07-15 23:28:37.374568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.187 [2024-07-15 23:28:37.374593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.187 qpair failed and we were unable to recover it. 00:25:22.187 [2024-07-15 23:28:37.374769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.187 [2024-07-15 23:28:37.374795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.187 qpair failed and we were unable to recover it. 
00:25:22.187 [2024-07-15 23:28:37.374930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.187 [2024-07-15 23:28:37.374955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.187 qpair failed and we were unable to recover it. 00:25:22.187 [2024-07-15 23:28:37.375151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.187 [2024-07-15 23:28:37.375176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.187 qpair failed and we were unable to recover it. 00:25:22.187 [2024-07-15 23:28:37.375331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.187 [2024-07-15 23:28:37.375356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.187 qpair failed and we were unable to recover it. 00:25:22.187 [2024-07-15 23:28:37.375534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.187 [2024-07-15 23:28:37.375559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.187 qpair failed and we were unable to recover it. 00:25:22.187 [2024-07-15 23:28:37.375697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.187 [2024-07-15 23:28:37.375721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.187 qpair failed and we were unable to recover it. 00:25:22.187 [2024-07-15 23:28:37.375865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.187 [2024-07-15 23:28:37.375890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.187 qpair failed and we were unable to recover it. 00:25:22.187 [2024-07-15 23:28:37.376008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.187 [2024-07-15 23:28:37.376034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.187 qpair failed and we were unable to recover it. 00:25:22.187 [2024-07-15 23:28:37.376148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.187 [2024-07-15 23:28:37.376173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.187 qpair failed and we were unable to recover it. 00:25:22.187 [2024-07-15 23:28:37.376314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.187 [2024-07-15 23:28:37.376340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.187 qpair failed and we were unable to recover it. 00:25:22.187 [2024-07-15 23:28:37.376502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.187 [2024-07-15 23:28:37.376527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.187 qpair failed and we were unable to recover it. 
00:25:22.187 [2024-07-15 23:28:37.376761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.187 [2024-07-15 23:28:37.376787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.187 qpair failed and we were unable to recover it. 00:25:22.187 [2024-07-15 23:28:37.376893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.187 [2024-07-15 23:28:37.376918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.187 qpair failed and we were unable to recover it. 00:25:22.187 [2024-07-15 23:28:37.377076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.187 [2024-07-15 23:28:37.377102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.187 qpair failed and we were unable to recover it. 00:25:22.187 [2024-07-15 23:28:37.377238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.187 [2024-07-15 23:28:37.377263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.187 qpair failed and we were unable to recover it. 00:25:22.187 [2024-07-15 23:28:37.377438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.187 [2024-07-15 23:28:37.377463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.187 qpair failed and we were unable to recover it. 00:25:22.187 [2024-07-15 23:28:37.377581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.187 [2024-07-15 23:28:37.377606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.187 qpair failed and we were unable to recover it. 00:25:22.187 [2024-07-15 23:28:37.377755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.187 [2024-07-15 23:28:37.377781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.187 qpair failed and we were unable to recover it. 00:25:22.187 [2024-07-15 23:28:37.377926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.188 [2024-07-15 23:28:37.377951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.188 qpair failed and we were unable to recover it. 00:25:22.188 [2024-07-15 23:28:37.378172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.188 [2024-07-15 23:28:37.378197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.188 qpair failed and we were unable to recover it. 00:25:22.188 [2024-07-15 23:28:37.378305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.188 [2024-07-15 23:28:37.378330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.188 qpair failed and we were unable to recover it. 
00:25:22.188 [2024-07-15 23:28:37.378507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.188 [2024-07-15 23:28:37.378532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.188 qpair failed and we were unable to recover it. 00:25:22.188 [2024-07-15 23:28:37.378696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.188 [2024-07-15 23:28:37.378720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.188 qpair failed and we were unable to recover it. 00:25:22.188 [2024-07-15 23:28:37.378871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.188 [2024-07-15 23:28:37.378898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.188 qpair failed and we were unable to recover it. 00:25:22.188 [2024-07-15 23:28:37.379074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.188 [2024-07-15 23:28:37.379100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.188 qpair failed and we were unable to recover it. 00:25:22.188 [2024-07-15 23:28:37.379286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.188 [2024-07-15 23:28:37.379311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.188 qpair failed and we were unable to recover it. 00:25:22.188 [2024-07-15 23:28:37.379478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.188 [2024-07-15 23:28:37.379503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.188 qpair failed and we were unable to recover it. 00:25:22.188 [2024-07-15 23:28:37.379771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.188 [2024-07-15 23:28:37.379797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.188 qpair failed and we were unable to recover it. 00:25:22.188 [2024-07-15 23:28:37.379931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.188 [2024-07-15 23:28:37.379956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.188 qpair failed and we were unable to recover it. 00:25:22.188 [2024-07-15 23:28:37.380150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.188 [2024-07-15 23:28:37.380178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.188 qpair failed and we were unable to recover it. 00:25:22.188 [2024-07-15 23:28:37.380330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.188 [2024-07-15 23:28:37.380358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.188 qpair failed and we were unable to recover it. 
00:25:22.188 [2024-07-15 23:28:37.380510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.188 [2024-07-15 23:28:37.380535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.188 qpair failed and we were unable to recover it. 00:25:22.188 [2024-07-15 23:28:37.380668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.188 [2024-07-15 23:28:37.380693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.188 qpair failed and we were unable to recover it. 00:25:22.188 [2024-07-15 23:28:37.380851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.188 [2024-07-15 23:28:37.380877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.188 qpair failed and we were unable to recover it. 00:25:22.188 [2024-07-15 23:28:37.380992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.188 [2024-07-15 23:28:37.381017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.188 qpair failed and we were unable to recover it. 00:25:22.188 [2024-07-15 23:28:37.381139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.188 [2024-07-15 23:28:37.381164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.188 qpair failed and we were unable to recover it. 00:25:22.188 [2024-07-15 23:28:37.381300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.188 [2024-07-15 23:28:37.381325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.188 qpair failed and we were unable to recover it. 00:25:22.188 [2024-07-15 23:28:37.381456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.188 [2024-07-15 23:28:37.381480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.188 qpair failed and we were unable to recover it. 00:25:22.188 [2024-07-15 23:28:37.381624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.188 [2024-07-15 23:28:37.381649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.188 qpair failed and we were unable to recover it. 00:25:22.188 [2024-07-15 23:28:37.381788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.188 [2024-07-15 23:28:37.381814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.188 qpair failed and we were unable to recover it. 00:25:22.188 [2024-07-15 23:28:37.381952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.188 [2024-07-15 23:28:37.381977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.188 qpair failed and we were unable to recover it. 
00:25:22.188 [2024-07-15 23:28:37.382087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.188 [2024-07-15 23:28:37.382111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.188 qpair failed and we were unable to recover it. 00:25:22.188 Malloc0 00:25:22.188 [2024-07-15 23:28:37.382283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.188 [2024-07-15 23:28:37.382308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.188 qpair failed and we were unable to recover it. 00:25:22.188 [2024-07-15 23:28:37.382443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.188 [2024-07-15 23:28:37.382468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.188 qpair failed and we were unable to recover it. 00:25:22.188 23:28:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.188 [2024-07-15 23:28:37.382643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.188 [2024-07-15 23:28:37.382668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.188 qpair failed and we were unable to recover it. 00:25:22.188 [2024-07-15 23:28:37.382831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.188 23:28:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:22.188 [2024-07-15 23:28:37.382859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.188 qpair failed and we were unable to recover it. 00:25:22.188 23:28:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.188 [2024-07-15 23:28:37.382999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.188 [2024-07-15 23:28:37.383024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.188 qpair failed and we were unable to recover it. 00:25:22.188 23:28:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:22.188 [2024-07-15 23:28:37.383160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.188 [2024-07-15 23:28:37.383185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.188 qpair failed and we were unable to recover it. 00:25:22.188 [2024-07-15 23:28:37.383362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.188 [2024-07-15 23:28:37.383398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.188 qpair failed and we were unable to recover it. 
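The bare 'Malloc0' token interleaved with the errors above is the RPC reply naming the RAM-backed bdev that will back the target namespace in this test. Reproduced outside the harness it would look roughly like the sketch below; the rpc.py path, RPC socket, and the 64 MiB / 512-byte sizing are assumptions for illustration, and only the bdev name comes from this log.

# Sketch only: create a RAM-backed bdev named Malloc0 against a running
# nvmf_tgt; the size and block size here are illustrative values.
sudo ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0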
00:25:22.188 [2024-07-15 23:28:37.383570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.188 [2024-07-15 23:28:37.383595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.188 qpair failed and we were unable to recover it. 00:25:22.188 [2024-07-15 23:28:37.383712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.188 [2024-07-15 23:28:37.383745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.188 qpair failed and we were unable to recover it. 00:25:22.188 [2024-07-15 23:28:37.383892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.188 [2024-07-15 23:28:37.383930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a10000b90 with addr=10.0.0.2, port=4420 00:25:22.188 qpair failed and we were unable to recover it. 00:25:22.188 [2024-07-15 23:28:37.384066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.188 [2024-07-15 23:28:37.384115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.188 qpair failed and we were unable to recover it. 00:25:22.188 [2024-07-15 23:28:37.384241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.188 [2024-07-15 23:28:37.384266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.188 qpair failed and we were unable to recover it. 00:25:22.188 [2024-07-15 23:28:37.384439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.188 [2024-07-15 23:28:37.384464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.188 qpair failed and we were unable to recover it. 00:25:22.188 [2024-07-15 23:28:37.384628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.188 [2024-07-15 23:28:37.384659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.188 qpair failed and we were unable to recover it. 00:25:22.188 [2024-07-15 23:28:37.384868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.188 [2024-07-15 23:28:37.384894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.188 qpair failed and we were unable to recover it. 00:25:22.188 [2024-07-15 23:28:37.385038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.188 [2024-07-15 23:28:37.385063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.188 qpair failed and we were unable to recover it. 00:25:22.188 [2024-07-15 23:28:37.385225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.188 [2024-07-15 23:28:37.385250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.188 qpair failed and we were unable to recover it. 
00:25:22.188 [2024-07-15 23:28:37.385365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.188 [2024-07-15 23:28:37.385390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.188 qpair failed and we were unable to recover it. 00:25:22.188 [2024-07-15 23:28:37.385556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.188 [2024-07-15 23:28:37.385581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.188 qpair failed and we were unable to recover it. 00:25:22.188 [2024-07-15 23:28:37.385751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.188 [2024-07-15 23:28:37.385776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.188 qpair failed and we were unable to recover it. 00:25:22.188 [2024-07-15 23:28:37.385924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.188 [2024-07-15 23:28:37.385949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.188 [2024-07-15 23:28:37.385946] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:22.188 qpair failed and we were unable to recover it. 00:25:22.188 [2024-07-15 23:28:37.386091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.188 [2024-07-15 23:28:37.386119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.188 qpair failed and we were unable to recover it. 00:25:22.188 [2024-07-15 23:28:37.386257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.188 [2024-07-15 23:28:37.386282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.188 qpair failed and we were unable to recover it. 00:25:22.188 [2024-07-15 23:28:37.386440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.189 [2024-07-15 23:28:37.386465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.189 qpair failed and we were unable to recover it. 00:25:22.189 [2024-07-15 23:28:37.386605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.189 [2024-07-15 23:28:37.386630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.189 qpair failed and we were unable to recover it. 00:25:22.189 [2024-07-15 23:28:37.386777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.189 [2024-07-15 23:28:37.386803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.189 qpair failed and we were unable to recover it. 00:25:22.189 [2024-07-15 23:28:37.386917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.189 [2024-07-15 23:28:37.386946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.189 qpair failed and we were unable to recover it. 
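The xtrace fragments above (rpc_cmd nvmf_create_transport -t tcp -o from host/target_disconnect.sh) together with the "*** TCP Transport Init ***" notice from tcp.c show the test bringing the TCP transport up on the target while the initiator keeps retrying its connects. Issued directly rather than through the test's rpc_cmd wrapper, the call is roughly the following sketch; the rpc.py and socket paths are assumptions, and the extra option abbreviated as -o in the test's transport options is omitted here rather than guessed at.

# Sketch only: create the NVMe-oF TCP transport on a standalone target.
sudo ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp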
00:25:22.189 [2024-07-15 23:28:37.387061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.189 [2024-07-15 23:28:37.387086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.189 qpair failed and we were unable to recover it. 00:25:22.189 [2024-07-15 23:28:37.387224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.189 [2024-07-15 23:28:37.387248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.189 qpair failed and we were unable to recover it. 00:25:22.189 [2024-07-15 23:28:37.387402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.189 [2024-07-15 23:28:37.387428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7a20000b90 with addr=10.0.0.2, port=4420 00:25:22.189 qpair failed and we were unable to recover it. 00:25:22.189 [2024-07-15 23:28:37.387571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.189 [2024-07-15 23:28:37.387597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.189 qpair failed and we were unable to recover it. 00:25:22.189 [2024-07-15 23:28:37.387768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.189 [2024-07-15 23:28:37.387794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.189 qpair failed and we were unable to recover it. 00:25:22.189 [2024-07-15 23:28:37.387932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.189 [2024-07-15 23:28:37.387957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.189 qpair failed and we were unable to recover it. 00:25:22.189 [2024-07-15 23:28:37.388087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.189 [2024-07-15 23:28:37.388113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.189 qpair failed and we were unable to recover it. 00:25:22.189 [2024-07-15 23:28:37.388252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.189 [2024-07-15 23:28:37.388277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.189 qpair failed and we were unable to recover it. 00:25:22.189 [2024-07-15 23:28:37.388447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.189 [2024-07-15 23:28:37.388471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.189 qpair failed and we were unable to recover it. 00:25:22.189 [2024-07-15 23:28:37.388616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.189 [2024-07-15 23:28:37.388640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.189 qpair failed and we were unable to recover it. 
00:25:22.189 [2024-07-15 23:28:37.388796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.189 [2024-07-15 23:28:37.388822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.189 qpair failed and we were unable to recover it. 00:25:22.189 [2024-07-15 23:28:37.388950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.189 [2024-07-15 23:28:37.388973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.189 qpair failed and we were unable to recover it. 00:25:22.189 [2024-07-15 23:28:37.389117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.189 [2024-07-15 23:28:37.389143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.189 qpair failed and we were unable to recover it. 00:25:22.189 [2024-07-15 23:28:37.389321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.189 [2024-07-15 23:28:37.389345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.189 qpair failed and we were unable to recover it. 00:25:22.189 [2024-07-15 23:28:37.389448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.189 [2024-07-15 23:28:37.389472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.189 qpair failed and we were unable to recover it. 00:25:22.189 [2024-07-15 23:28:37.389619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.189 [2024-07-15 23:28:37.389644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.189 qpair failed and we were unable to recover it. 00:25:22.189 [2024-07-15 23:28:37.389803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.189 [2024-07-15 23:28:37.389829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.189 qpair failed and we were unable to recover it. 00:25:22.189 [2024-07-15 23:28:37.389969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.189 [2024-07-15 23:28:37.389993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.189 qpair failed and we were unable to recover it. 00:25:22.189 [2024-07-15 23:28:37.390205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.189 [2024-07-15 23:28:37.390230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.189 qpair failed and we were unable to recover it. 00:25:22.189 [2024-07-15 23:28:37.390369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.189 [2024-07-15 23:28:37.390394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.189 qpair failed and we were unable to recover it. 
00:25:22.189 [2024-07-15 23:28:37.390604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.189 [2024-07-15 23:28:37.390629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.189 qpair failed and we were unable to recover it. 00:25:22.189 [2024-07-15 23:28:37.390792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.189 [2024-07-15 23:28:37.390817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.189 qpair failed and we were unable to recover it. 00:25:22.189 [2024-07-15 23:28:37.390995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.189 [2024-07-15 23:28:37.391019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.189 qpair failed and we were unable to recover it. 00:25:22.189 [2024-07-15 23:28:37.391185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.189 [2024-07-15 23:28:37.391211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.189 qpair failed and we were unable to recover it. 00:25:22.189 [2024-07-15 23:28:37.391380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.189 [2024-07-15 23:28:37.391404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.189 qpair failed and we were unable to recover it. 00:25:22.189 [2024-07-15 23:28:37.391550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.189 [2024-07-15 23:28:37.391573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.189 qpair failed and we were unable to recover it. 00:25:22.189 [2024-07-15 23:28:37.391711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.189 [2024-07-15 23:28:37.391736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.189 qpair failed and we were unable to recover it. 00:25:22.189 [2024-07-15 23:28:37.391891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.189 [2024-07-15 23:28:37.391916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.189 qpair failed and we were unable to recover it. 00:25:22.189 [2024-07-15 23:28:37.392068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.189 [2024-07-15 23:28:37.392093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.189 qpair failed and we were unable to recover it. 00:25:22.189 [2024-07-15 23:28:37.392267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.189 [2024-07-15 23:28:37.392295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.189 qpair failed and we were unable to recover it. 
00:25:22.189 [2024-07-15 23:28:37.392437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.189 [2024-07-15 23:28:37.392471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.189 qpair failed and we were unable to recover it. 00:25:22.189 [2024-07-15 23:28:37.392609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.189 [2024-07-15 23:28:37.392634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.189 qpair failed and we were unable to recover it. 00:25:22.189 [2024-07-15 23:28:37.392786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.189 [2024-07-15 23:28:37.392812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.189 qpair failed and we were unable to recover it. 00:25:22.189 [2024-07-15 23:28:37.392940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.189 [2024-07-15 23:28:37.392965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.189 qpair failed and we were unable to recover it. 00:25:22.189 [2024-07-15 23:28:37.393096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.189 [2024-07-15 23:28:37.393121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.189 qpair failed and we were unable to recover it. 00:25:22.189 [2024-07-15 23:28:37.393349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.189 [2024-07-15 23:28:37.393374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.189 qpair failed and we were unable to recover it. 00:25:22.189 [2024-07-15 23:28:37.393541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.189 [2024-07-15 23:28:37.393566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.189 qpair failed and we were unable to recover it. 00:25:22.189 [2024-07-15 23:28:37.393761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.189 [2024-07-15 23:28:37.393786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.189 qpair failed and we were unable to recover it. 00:25:22.189 [2024-07-15 23:28:37.393908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.189 [2024-07-15 23:28:37.393933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.189 qpair failed and we were unable to recover it. 00:25:22.189 [2024-07-15 23:28:37.394041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.189 [2024-07-15 23:28:37.394066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.189 qpair failed and we were unable to recover it. 
00:25:22.189 23:28:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.189 [2024-07-15 23:28:37.394210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.394235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 00:25:22.190 23:28:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:22.190 23:28:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.190 [2024-07-15 23:28:37.394435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.394460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 00:25:22.190 23:28:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:22.190 [2024-07-15 23:28:37.394622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.394647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 00:25:22.190 [2024-07-15 23:28:37.394774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.394799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 00:25:22.190 [2024-07-15 23:28:37.394920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.394945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 00:25:22.190 [2024-07-15 23:28:37.395090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.395115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 00:25:22.190 [2024-07-15 23:28:37.395244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.395268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 00:25:22.190 [2024-07-15 23:28:37.395432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.395456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 
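The trace above shows the next setup step, rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001, which creates the subsystem the initiator will eventually reconnect to; -a allows any host to connect and -s sets the serial number. A standalone equivalent, with the NQN and serial taken from the log and the rpc.py/socket paths assumed, would be:

# Sketch only: create the target subsystem that the disconnect test reconnects to.
sudo ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem \
    nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001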
00:25:22.190 [2024-07-15 23:28:37.395695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.395719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 00:25:22.190 [2024-07-15 23:28:37.395844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.395869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 00:25:22.190 [2024-07-15 23:28:37.395986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.396011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 00:25:22.190 [2024-07-15 23:28:37.396200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.396225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 00:25:22.190 [2024-07-15 23:28:37.396368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.396393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 00:25:22.190 [2024-07-15 23:28:37.396530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.396554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 00:25:22.190 [2024-07-15 23:28:37.396686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.396711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 00:25:22.190 [2024-07-15 23:28:37.396889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.396914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 00:25:22.190 [2024-07-15 23:28:37.397020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.397045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 00:25:22.190 [2024-07-15 23:28:37.397218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.397242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 
00:25:22.190 [2024-07-15 23:28:37.397345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.397369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 00:25:22.190 [2024-07-15 23:28:37.397541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.397566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 00:25:22.190 [2024-07-15 23:28:37.397800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.397826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 00:25:22.190 [2024-07-15 23:28:37.398001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.398025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 00:25:22.190 [2024-07-15 23:28:37.398242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.398267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 00:25:22.190 [2024-07-15 23:28:37.398448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.398473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 00:25:22.190 [2024-07-15 23:28:37.398690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.398714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 00:25:22.190 [2024-07-15 23:28:37.398865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.398894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 00:25:22.190 [2024-07-15 23:28:37.399006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.399030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 00:25:22.190 [2024-07-15 23:28:37.399164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.399188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 
00:25:22.190 [2024-07-15 23:28:37.399331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.399356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 00:25:22.190 [2024-07-15 23:28:37.399491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.399515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 00:25:22.190 [2024-07-15 23:28:37.399679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.399703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 00:25:22.190 [2024-07-15 23:28:37.399870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.399899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 00:25:22.190 [2024-07-15 23:28:37.400070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.400094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 00:25:22.190 [2024-07-15 23:28:37.400314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.400338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 00:25:22.190 [2024-07-15 23:28:37.400519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.400544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 00:25:22.190 [2024-07-15 23:28:37.400713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.400743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 00:25:22.190 [2024-07-15 23:28:37.400880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.400905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 00:25:22.190 [2024-07-15 23:28:37.401066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.401090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 
00:25:22.190 [2024-07-15 23:28:37.401229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.401254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 00:25:22.190 [2024-07-15 23:28:37.401430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.401454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 00:25:22.190 [2024-07-15 23:28:37.401673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.401697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 00:25:22.190 [2024-07-15 23:28:37.401876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.401907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 00:25:22.190 [2024-07-15 23:28:37.402046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.402071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 00:25:22.190 23:28:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.190 [2024-07-15 23:28:37.402290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.402314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 00:25:22.190 23:28:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:22.190 23:28:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.190 23:28:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:22.190 [2024-07-15 23:28:37.402548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.402575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 00:25:22.190 [2024-07-15 23:28:37.402750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.402776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 
00:25:22.190 [2024-07-15 23:28:37.402992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.403017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 00:25:22.190 [2024-07-15 23:28:37.403183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.403208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 00:25:22.190 [2024-07-15 23:28:37.403435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.403467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 00:25:22.190 [2024-07-15 23:28:37.403586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.403610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 00:25:22.190 [2024-07-15 23:28:37.403786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.190 [2024-07-15 23:28:37.403818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.190 qpair failed and we were unable to recover it. 00:25:22.190 [2024-07-15 23:28:37.403974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.403998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 00:25:22.191 [2024-07-15 23:28:37.404152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.404176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 00:25:22.191 [2024-07-15 23:28:37.404315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.404339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 00:25:22.191 [2024-07-15 23:28:37.404512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.404537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 00:25:22.191 [2024-07-15 23:28:37.404723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.404755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 
00:25:22.191 [2024-07-15 23:28:37.404914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.404938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 00:25:22.191 [2024-07-15 23:28:37.405080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.405105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 00:25:22.191 [2024-07-15 23:28:37.405245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.405270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 00:25:22.191 [2024-07-15 23:28:37.405415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.405439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 00:25:22.191 [2024-07-15 23:28:37.405605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.405629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 00:25:22.191 [2024-07-15 23:28:37.405830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.405855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 00:25:22.191 [2024-07-15 23:28:37.406025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.406049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 00:25:22.191 [2024-07-15 23:28:37.406185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.406210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 00:25:22.191 [2024-07-15 23:28:37.406387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.406423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 00:25:22.191 [2024-07-15 23:28:37.406572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.406603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 
00:25:22.191 [2024-07-15 23:28:37.406773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.406798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 00:25:22.191 [2024-07-15 23:28:37.406924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.406949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 00:25:22.191 [2024-07-15 23:28:37.407122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.407147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 00:25:22.191 [2024-07-15 23:28:37.407324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.407348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 00:25:22.191 [2024-07-15 23:28:37.407523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.407548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 00:25:22.191 [2024-07-15 23:28:37.407765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.407789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 00:25:22.191 [2024-07-15 23:28:37.407934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.407958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 00:25:22.191 [2024-07-15 23:28:37.408132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.408157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 00:25:22.191 [2024-07-15 23:28:37.408405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.408429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 00:25:22.191 [2024-07-15 23:28:37.408568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.408593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 
00:25:22.191 [2024-07-15 23:28:37.408723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.408760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 00:25:22.191 [2024-07-15 23:28:37.408980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.409014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 00:25:22.191 [2024-07-15 23:28:37.409163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.409188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 00:25:22.191 [2024-07-15 23:28:37.409362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.409387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 00:25:22.191 [2024-07-15 23:28:37.409501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.409526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 00:25:22.191 [2024-07-15 23:28:37.409679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.409703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 00:25:22.191 [2024-07-15 23:28:37.409869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.409903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 00:25:22.191 [2024-07-15 23:28:37.410047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.410072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 00:25:22.191 23:28:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.191 [2024-07-15 23:28:37.410240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.410265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 
00:25:22.191 23:28:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:22.191 [2024-07-15 23:28:37.410403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.410428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 00:25:22.191 23:28:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.191 23:28:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:22.191 [2024-07-15 23:28:37.410600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.410625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 00:25:22.191 [2024-07-15 23:28:37.410806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.410832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 00:25:22.191 [2024-07-15 23:28:37.410972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.410997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 00:25:22.191 [2024-07-15 23:28:37.411160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.411185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 00:25:22.191 [2024-07-15 23:28:37.411341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.411375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 00:25:22.191 [2024-07-15 23:28:37.411642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.411667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 00:25:22.191 [2024-07-15 23:28:37.411842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.411868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 00:25:22.191 [2024-07-15 23:28:37.412076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.412100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 
00:25:22.191 [2024-07-15 23:28:37.412269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.412293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 00:25:22.191 [2024-07-15 23:28:37.412475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.412500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 00:25:22.191 [2024-07-15 23:28:37.412687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.412712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 00:25:22.191 [2024-07-15 23:28:37.412828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.412853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 00:25:22.191 [2024-07-15 23:28:37.413032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.413057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 00:25:22.191 [2024-07-15 23:28:37.413197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.413221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 00:25:22.191 [2024-07-15 23:28:37.413416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.413441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 00:25:22.191 [2024-07-15 23:28:37.413632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.191 [2024-07-15 23:28:37.413657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.191 qpair failed and we were unable to recover it. 00:25:22.192 [2024-07-15 23:28:37.413866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.192 [2024-07-15 23:28:37.413891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.192 qpair failed and we were unable to recover it. 00:25:22.192 [2024-07-15 23:28:37.414060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.192 [2024-07-15 23:28:37.414099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9f1e0 with addr=10.0.0.2, port=4420 00:25:22.192 qpair failed and we were unable to recover it. 
00:25:22.192 [2024-07-15 23:28:37.414209] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:22.192 [2024-07-15 23:28:37.416657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.192 [2024-07-15 23:28:37.416824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.192 [2024-07-15 23:28:37.416859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.192 [2024-07-15 23:28:37.416875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.192 [2024-07-15 23:28:37.416888] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.192 [2024-07-15 23:28:37.416922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.192 qpair failed and we were unable to recover it. 00:25:22.192 23:28:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.192 23:28:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:22.192 23:28:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.192 23:28:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:22.192 23:28:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.192 23:28:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2446303 00:25:22.192 [2024-07-15 23:28:37.426576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.192 [2024-07-15 23:28:37.426688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.192 [2024-07-15 23:28:37.426715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.192 [2024-07-15 23:28:37.426730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.192 [2024-07-15 23:28:37.426749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.192 [2024-07-15 23:28:37.426780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.192 qpair failed and we were unable to recover it. 
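The interleaved shell trace above is the target-side setup racing the host-side reconnect loop: every "posix_sock_create: *ERROR*: connect() failed, errno = 111" entry is a plain connection refusal (Linux errno 111 is ECONNREFUSED) because the initiator keeps dialing 10.0.0.2:4420 before the listener exists. Once "nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***" appears, the failure mode changes: the TCP connection now succeeds but the Fabrics CONNECT for the I/O queue is rejected ("Unknown controller ID 0x1", completion sct 1 / sc 130, i.e. 0x82), which is the condition this target_disconnect test case exercises. For reference, a minimal sketch of the same target configuration issued directly through scripts/rpc.py instead of the test's rpc_cmd wrapper; the transport creation and the Malloc0 bdev are assumptions here, since those steps happen earlier in the script and are not visible in this excerpt:

    # assumed prerequisites (earlier in target_disconnect.sh, not shown in this excerpt)
    scripts/rpc.py nvmf_create_transport -t tcp
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # steps traced above at target_disconnect.sh@22, @24, @25 and @26
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420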
00:25:22.192 [2024-07-15 23:28:37.436549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.192 [2024-07-15 23:28:37.436703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.192 [2024-07-15 23:28:37.436729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.192 [2024-07-15 23:28:37.436755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.192 [2024-07-15 23:28:37.436769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.192 [2024-07-15 23:28:37.436798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.192 qpair failed and we were unable to recover it. 00:25:22.192 [2024-07-15 23:28:37.446534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.192 [2024-07-15 23:28:37.446665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.192 [2024-07-15 23:28:37.446695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.192 [2024-07-15 23:28:37.446711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.192 [2024-07-15 23:28:37.446724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.192 [2024-07-15 23:28:37.446765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.192 qpair failed and we were unable to recover it. 00:25:22.192 [2024-07-15 23:28:37.456639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.192 [2024-07-15 23:28:37.456806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.192 [2024-07-15 23:28:37.456847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.192 [2024-07-15 23:28:37.456871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.192 [2024-07-15 23:28:37.456888] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.192 [2024-07-15 23:28:37.456927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.192 qpair failed and we were unable to recover it. 
00:25:22.449 [2024-07-15 23:28:37.466650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.449 [2024-07-15 23:28:37.466792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.449 [2024-07-15 23:28:37.466820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.449 [2024-07-15 23:28:37.466835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.449 [2024-07-15 23:28:37.466848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.449 [2024-07-15 23:28:37.466878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.449 qpair failed and we were unable to recover it. 00:25:22.449 [2024-07-15 23:28:37.476621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.449 [2024-07-15 23:28:37.476746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.449 [2024-07-15 23:28:37.476773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.449 [2024-07-15 23:28:37.476789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.449 [2024-07-15 23:28:37.476802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.449 [2024-07-15 23:28:37.476832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.449 qpair failed and we were unable to recover it. 00:25:22.450 [2024-07-15 23:28:37.486616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.450 [2024-07-15 23:28:37.486775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.450 [2024-07-15 23:28:37.486807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.450 [2024-07-15 23:28:37.486826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.450 [2024-07-15 23:28:37.486840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.450 [2024-07-15 23:28:37.486876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.450 qpair failed and we were unable to recover it. 
00:25:22.450 [2024-07-15 23:28:37.496627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.450 [2024-07-15 23:28:37.496761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.450 [2024-07-15 23:28:37.496788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.450 [2024-07-15 23:28:37.496802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.450 [2024-07-15 23:28:37.496816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.450 [2024-07-15 23:28:37.496844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.450 qpair failed and we were unable to recover it. 00:25:22.450 [2024-07-15 23:28:37.506653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.450 [2024-07-15 23:28:37.506774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.450 [2024-07-15 23:28:37.506800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.450 [2024-07-15 23:28:37.506814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.450 [2024-07-15 23:28:37.506827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.450 [2024-07-15 23:28:37.506857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.450 qpair failed and we were unable to recover it. 00:25:22.450 [2024-07-15 23:28:37.516679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.450 [2024-07-15 23:28:37.516801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.450 [2024-07-15 23:28:37.516827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.450 [2024-07-15 23:28:37.516841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.450 [2024-07-15 23:28:37.516854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.450 [2024-07-15 23:28:37.516883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.450 qpair failed and we were unable to recover it. 
00:25:22.450 [2024-07-15 23:28:37.526749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.450 [2024-07-15 23:28:37.526858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.450 [2024-07-15 23:28:37.526883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.450 [2024-07-15 23:28:37.526898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.450 [2024-07-15 23:28:37.526911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.450 [2024-07-15 23:28:37.526939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.450 qpair failed and we were unable to recover it. 00:25:22.450 [2024-07-15 23:28:37.536818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.450 [2024-07-15 23:28:37.536970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.450 [2024-07-15 23:28:37.537002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.450 [2024-07-15 23:28:37.537017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.450 [2024-07-15 23:28:37.537030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.450 [2024-07-15 23:28:37.537067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.450 qpair failed and we were unable to recover it. 00:25:22.450 [2024-07-15 23:28:37.546800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.450 [2024-07-15 23:28:37.546906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.450 [2024-07-15 23:28:37.546932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.450 [2024-07-15 23:28:37.546946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.450 [2024-07-15 23:28:37.546959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.450 [2024-07-15 23:28:37.546997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.450 qpair failed and we were unable to recover it. 
00:25:22.450 [2024-07-15 23:28:37.556848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.450 [2024-07-15 23:28:37.556952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.450 [2024-07-15 23:28:37.556977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.450 [2024-07-15 23:28:37.556991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.450 [2024-07-15 23:28:37.557005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.450 [2024-07-15 23:28:37.557033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.450 qpair failed and we were unable to recover it. 00:25:22.450 [2024-07-15 23:28:37.566829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.450 [2024-07-15 23:28:37.566941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.450 [2024-07-15 23:28:37.566967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.450 [2024-07-15 23:28:37.566982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.450 [2024-07-15 23:28:37.566995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.450 [2024-07-15 23:28:37.567023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.450 qpair failed and we were unable to recover it. 00:25:22.450 [2024-07-15 23:28:37.576886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.450 [2024-07-15 23:28:37.576992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.450 [2024-07-15 23:28:37.577017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.450 [2024-07-15 23:28:37.577032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.450 [2024-07-15 23:28:37.577060] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.450 [2024-07-15 23:28:37.577089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.450 qpair failed and we were unable to recover it. 
00:25:22.450 [2024-07-15 23:28:37.586864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.450 [2024-07-15 23:28:37.586965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.450 [2024-07-15 23:28:37.586990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.450 [2024-07-15 23:28:37.587005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.450 [2024-07-15 23:28:37.587018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.450 [2024-07-15 23:28:37.587046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.450 qpair failed and we were unable to recover it. 00:25:22.450 [2024-07-15 23:28:37.596951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.450 [2024-07-15 23:28:37.597075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.450 [2024-07-15 23:28:37.597101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.450 [2024-07-15 23:28:37.597115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.450 [2024-07-15 23:28:37.597129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.450 [2024-07-15 23:28:37.597157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.450 qpair failed and we were unable to recover it. 00:25:22.450 [2024-07-15 23:28:37.606948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.450 [2024-07-15 23:28:37.607051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.450 [2024-07-15 23:28:37.607076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.450 [2024-07-15 23:28:37.607091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.450 [2024-07-15 23:28:37.607104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.450 [2024-07-15 23:28:37.607132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.450 qpair failed and we were unable to recover it. 
00:25:22.450 [2024-07-15 23:28:37.617005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.451 [2024-07-15 23:28:37.617112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.451 [2024-07-15 23:28:37.617137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.451 [2024-07-15 23:28:37.617152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.451 [2024-07-15 23:28:37.617164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.451 [2024-07-15 23:28:37.617193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.451 qpair failed and we were unable to recover it. 00:25:22.451 [2024-07-15 23:28:37.627023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.451 [2024-07-15 23:28:37.627168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.451 [2024-07-15 23:28:37.627193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.451 [2024-07-15 23:28:37.627208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.451 [2024-07-15 23:28:37.627221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.451 [2024-07-15 23:28:37.627259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.451 qpair failed and we were unable to recover it. 00:25:22.451 [2024-07-15 23:28:37.637083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.451 [2024-07-15 23:28:37.637204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.451 [2024-07-15 23:28:37.637230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.451 [2024-07-15 23:28:37.637245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.451 [2024-07-15 23:28:37.637258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.451 [2024-07-15 23:28:37.637287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.451 qpair failed and we were unable to recover it. 
00:25:22.451 [2024-07-15 23:28:37.647102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.451 [2024-07-15 23:28:37.647227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.451 [2024-07-15 23:28:37.647252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.451 [2024-07-15 23:28:37.647266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.451 [2024-07-15 23:28:37.647279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.451 [2024-07-15 23:28:37.647308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.451 qpair failed and we were unable to recover it. 00:25:22.451 [2024-07-15 23:28:37.657102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.451 [2024-07-15 23:28:37.657221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.451 [2024-07-15 23:28:37.657246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.451 [2024-07-15 23:28:37.657261] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.451 [2024-07-15 23:28:37.657274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.451 [2024-07-15 23:28:37.657302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.451 qpair failed and we were unable to recover it. 00:25:22.451 [2024-07-15 23:28:37.667130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.451 [2024-07-15 23:28:37.667257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.451 [2024-07-15 23:28:37.667282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.451 [2024-07-15 23:28:37.667297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.451 [2024-07-15 23:28:37.667315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.451 [2024-07-15 23:28:37.667352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.451 qpair failed and we were unable to recover it. 
00:25:22.451 [2024-07-15 23:28:37.677172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.451 [2024-07-15 23:28:37.677289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.451 [2024-07-15 23:28:37.677314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.451 [2024-07-15 23:28:37.677329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.451 [2024-07-15 23:28:37.677342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.451 [2024-07-15 23:28:37.677371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.451 qpair failed and we were unable to recover it. 00:25:22.451 [2024-07-15 23:28:37.687266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.451 [2024-07-15 23:28:37.687394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.451 [2024-07-15 23:28:37.687419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.451 [2024-07-15 23:28:37.687434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.451 [2024-07-15 23:28:37.687447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.451 [2024-07-15 23:28:37.687475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.451 qpair failed and we were unable to recover it. 00:25:22.451 [2024-07-15 23:28:37.697298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.451 [2024-07-15 23:28:37.697417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.451 [2024-07-15 23:28:37.697442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.451 [2024-07-15 23:28:37.697456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.451 [2024-07-15 23:28:37.697469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.451 [2024-07-15 23:28:37.697497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.451 qpair failed and we were unable to recover it. 
00:25:22.451 [2024-07-15 23:28:37.707302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.451 [2024-07-15 23:28:37.707448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.451 [2024-07-15 23:28:37.707473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.451 [2024-07-15 23:28:37.707487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.451 [2024-07-15 23:28:37.707500] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.451 [2024-07-15 23:28:37.707539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.451 qpair failed and we were unable to recover it. 00:25:22.451 [2024-07-15 23:28:37.717266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.451 [2024-07-15 23:28:37.717391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.451 [2024-07-15 23:28:37.717417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.451 [2024-07-15 23:28:37.717432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.451 [2024-07-15 23:28:37.717445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.451 [2024-07-15 23:28:37.717473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.451 qpair failed and we were unable to recover it. 00:25:22.451 [2024-07-15 23:28:37.727323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.451 [2024-07-15 23:28:37.727451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.451 [2024-07-15 23:28:37.727476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.451 [2024-07-15 23:28:37.727491] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.451 [2024-07-15 23:28:37.727504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.451 [2024-07-15 23:28:37.727533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.451 qpair failed and we were unable to recover it. 
00:25:22.451 [2024-07-15 23:28:37.737296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.451 [2024-07-15 23:28:37.737420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.451 [2024-07-15 23:28:37.737445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.451 [2024-07-15 23:28:37.737459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.451 [2024-07-15 23:28:37.737472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.451 [2024-07-15 23:28:37.737501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.451 qpair failed and we were unable to recover it. 00:25:22.451 [2024-07-15 23:28:37.747319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.451 [2024-07-15 23:28:37.747420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.451 [2024-07-15 23:28:37.747445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.451 [2024-07-15 23:28:37.747460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.451 [2024-07-15 23:28:37.747473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.452 [2024-07-15 23:28:37.747501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.452 qpair failed and we were unable to recover it. 00:25:22.452 [2024-07-15 23:28:37.757447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.452 [2024-07-15 23:28:37.757610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.452 [2024-07-15 23:28:37.757635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.452 [2024-07-15 23:28:37.757655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.452 [2024-07-15 23:28:37.757669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.452 [2024-07-15 23:28:37.757697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.452 qpair failed and we were unable to recover it. 
00:25:22.710 [2024-07-15 23:28:37.767409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.710 [2024-07-15 23:28:37.767531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.710 [2024-07-15 23:28:37.767556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.710 [2024-07-15 23:28:37.767570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.710 [2024-07-15 23:28:37.767583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.710 [2024-07-15 23:28:37.767612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.710 qpair failed and we were unable to recover it. 00:25:22.710 [2024-07-15 23:28:37.777429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.710 [2024-07-15 23:28:37.777547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.710 [2024-07-15 23:28:37.777573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.710 [2024-07-15 23:28:37.777588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.710 [2024-07-15 23:28:37.777601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.710 [2024-07-15 23:28:37.777629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.710 qpair failed and we were unable to recover it. 00:25:22.710 [2024-07-15 23:28:37.787485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.710 [2024-07-15 23:28:37.787610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.710 [2024-07-15 23:28:37.787635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.710 [2024-07-15 23:28:37.787650] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.710 [2024-07-15 23:28:37.787663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.710 [2024-07-15 23:28:37.787692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.710 qpair failed and we were unable to recover it. 
00:25:22.710 [2024-07-15 23:28:37.797532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.710 [2024-07-15 23:28:37.797652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.710 [2024-07-15 23:28:37.797677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.710 [2024-07-15 23:28:37.797692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.710 [2024-07-15 23:28:37.797704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.710 [2024-07-15 23:28:37.797735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.710 qpair failed and we were unable to recover it. 00:25:22.710 [2024-07-15 23:28:37.807532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.710 [2024-07-15 23:28:37.807693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.710 [2024-07-15 23:28:37.807718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.710 [2024-07-15 23:28:37.807733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.710 [2024-07-15 23:28:37.807752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.710 [2024-07-15 23:28:37.807781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.710 qpair failed and we were unable to recover it. 00:25:22.710 [2024-07-15 23:28:37.817554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.710 [2024-07-15 23:28:37.817697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.710 [2024-07-15 23:28:37.817722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.710 [2024-07-15 23:28:37.817744] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.710 [2024-07-15 23:28:37.817759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.710 [2024-07-15 23:28:37.817787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.710 qpair failed and we were unable to recover it. 
00:25:22.710 [2024-07-15 23:28:37.827601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.710 [2024-07-15 23:28:37.827725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.710 [2024-07-15 23:28:37.827757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.710 [2024-07-15 23:28:37.827772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.710 [2024-07-15 23:28:37.827786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.710 [2024-07-15 23:28:37.827814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.710 qpair failed and we were unable to recover it. 00:25:22.710 [2024-07-15 23:28:37.837591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.710 [2024-07-15 23:28:37.837768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.710 [2024-07-15 23:28:37.837794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.710 [2024-07-15 23:28:37.837808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.710 [2024-07-15 23:28:37.837821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.710 [2024-07-15 23:28:37.837850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.710 qpair failed and we were unable to recover it. 00:25:22.710 [2024-07-15 23:28:37.847708] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.710 [2024-07-15 23:28:37.847883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.710 [2024-07-15 23:28:37.847908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.710 [2024-07-15 23:28:37.847928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.710 [2024-07-15 23:28:37.847942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.710 [2024-07-15 23:28:37.847971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.710 qpair failed and we were unable to recover it. 
00:25:22.710 [2024-07-15 23:28:37.857700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.710 [2024-07-15 23:28:37.857871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.710 [2024-07-15 23:28:37.857896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.710 [2024-07-15 23:28:37.857911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.710 [2024-07-15 23:28:37.857924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.710 [2024-07-15 23:28:37.857952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.710 qpair failed and we were unable to recover it. 00:25:22.710 [2024-07-15 23:28:37.867710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.710 [2024-07-15 23:28:37.867847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.710 [2024-07-15 23:28:37.867872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.710 [2024-07-15 23:28:37.867887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.710 [2024-07-15 23:28:37.867900] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.710 [2024-07-15 23:28:37.867929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.710 qpair failed and we were unable to recover it. 00:25:22.710 [2024-07-15 23:28:37.877788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.710 [2024-07-15 23:28:37.877893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.710 [2024-07-15 23:28:37.877918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.710 [2024-07-15 23:28:37.877932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.710 [2024-07-15 23:28:37.877945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.710 [2024-07-15 23:28:37.877974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.710 qpair failed and we were unable to recover it. 
00:25:22.710 [2024-07-15 23:28:37.887798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.710 [2024-07-15 23:28:37.887907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.710 [2024-07-15 23:28:37.887932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.710 [2024-07-15 23:28:37.887947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.710 [2024-07-15 23:28:37.887960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.710 [2024-07-15 23:28:37.887989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.710 qpair failed and we were unable to recover it. 00:25:22.710 [2024-07-15 23:28:37.897800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.710 [2024-07-15 23:28:37.897908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.710 [2024-07-15 23:28:37.897933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.710 [2024-07-15 23:28:37.897947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.710 [2024-07-15 23:28:37.897960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.710 [2024-07-15 23:28:37.897989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.710 qpair failed and we were unable to recover it. 00:25:22.710 [2024-07-15 23:28:37.907866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.710 [2024-07-15 23:28:37.907983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.710 [2024-07-15 23:28:37.908008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.710 [2024-07-15 23:28:37.908023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.710 [2024-07-15 23:28:37.908036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.710 [2024-07-15 23:28:37.908065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.710 qpair failed and we were unable to recover it. 
00:25:22.710 [2024-07-15 23:28:37.917827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.710 [2024-07-15 23:28:37.917942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.710 [2024-07-15 23:28:37.917967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.710 [2024-07-15 23:28:37.917982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.710 [2024-07-15 23:28:37.917995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.710 [2024-07-15 23:28:37.918024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.710 qpair failed and we were unable to recover it. 00:25:22.710 [2024-07-15 23:28:37.927914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.711 [2024-07-15 23:28:37.928033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.711 [2024-07-15 23:28:37.928058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.711 [2024-07-15 23:28:37.928073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.711 [2024-07-15 23:28:37.928086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.711 [2024-07-15 23:28:37.928121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.711 qpair failed and we were unable to recover it. 00:25:22.711 [2024-07-15 23:28:37.937929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.711 [2024-07-15 23:28:37.938037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.711 [2024-07-15 23:28:37.938062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.711 [2024-07-15 23:28:37.938082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.711 [2024-07-15 23:28:37.938096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.711 [2024-07-15 23:28:37.938126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.711 qpair failed and we were unable to recover it. 
00:25:22.711 [2024-07-15 23:28:37.947981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.711 [2024-07-15 23:28:37.948112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.711 [2024-07-15 23:28:37.948137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.711 [2024-07-15 23:28:37.948152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.711 [2024-07-15 23:28:37.948164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.711 [2024-07-15 23:28:37.948193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.711 qpair failed and we were unable to recover it. 00:25:22.711 [2024-07-15 23:28:37.957981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.711 [2024-07-15 23:28:37.958137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.711 [2024-07-15 23:28:37.958162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.711 [2024-07-15 23:28:37.958177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.711 [2024-07-15 23:28:37.958188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.711 [2024-07-15 23:28:37.958217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.711 qpair failed and we were unable to recover it. 00:25:22.711 [2024-07-15 23:28:37.968049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.711 [2024-07-15 23:28:37.968214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.711 [2024-07-15 23:28:37.968239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.711 [2024-07-15 23:28:37.968254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.711 [2024-07-15 23:28:37.968267] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.711 [2024-07-15 23:28:37.968295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.711 qpair failed and we were unable to recover it. 
00:25:22.711 [2024-07-15 23:28:37.978076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.711 [2024-07-15 23:28:37.978200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.711 [2024-07-15 23:28:37.978226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.711 [2024-07-15 23:28:37.978241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.711 [2024-07-15 23:28:37.978254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.711 [2024-07-15 23:28:37.978282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.711 qpair failed and we were unable to recover it. 00:25:22.711 [2024-07-15 23:28:37.988032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.711 [2024-07-15 23:28:37.988136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.711 [2024-07-15 23:28:37.988161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.711 [2024-07-15 23:28:37.988177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.711 [2024-07-15 23:28:37.988190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.711 [2024-07-15 23:28:37.988218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.711 qpair failed and we were unable to recover it. 00:25:22.711 [2024-07-15 23:28:37.998073] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.711 [2024-07-15 23:28:37.998211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.711 [2024-07-15 23:28:37.998236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.711 [2024-07-15 23:28:37.998251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.711 [2024-07-15 23:28:37.998264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.711 [2024-07-15 23:28:37.998293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.711 qpair failed and we were unable to recover it. 
00:25:22.711 [2024-07-15 23:28:38.008124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.711 [2024-07-15 23:28:38.008252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.711 [2024-07-15 23:28:38.008276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.711 [2024-07-15 23:28:38.008290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.711 [2024-07-15 23:28:38.008302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.711 [2024-07-15 23:28:38.008330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.711 qpair failed and we were unable to recover it. 00:25:22.711 [2024-07-15 23:28:38.018127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.711 [2024-07-15 23:28:38.018254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.711 [2024-07-15 23:28:38.018279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.711 [2024-07-15 23:28:38.018294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.711 [2024-07-15 23:28:38.018307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.711 [2024-07-15 23:28:38.018335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.711 qpair failed and we were unable to recover it. 00:25:22.969 [2024-07-15 23:28:38.028199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.969 [2024-07-15 23:28:38.028335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.969 [2024-07-15 23:28:38.028365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.969 [2024-07-15 23:28:38.028380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.969 [2024-07-15 23:28:38.028394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.969 [2024-07-15 23:28:38.028422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.969 qpair failed and we were unable to recover it. 
00:25:22.969 [2024-07-15 23:28:38.038224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.969 [2024-07-15 23:28:38.038358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.969 [2024-07-15 23:28:38.038382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.969 [2024-07-15 23:28:38.038397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.969 [2024-07-15 23:28:38.038410] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.969 [2024-07-15 23:28:38.038438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.969 qpair failed and we were unable to recover it. 00:25:22.969 [2024-07-15 23:28:38.048251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.969 [2024-07-15 23:28:38.048424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.969 [2024-07-15 23:28:38.048449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.969 [2024-07-15 23:28:38.048464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.969 [2024-07-15 23:28:38.048477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.969 [2024-07-15 23:28:38.048505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.969 qpair failed and we were unable to recover it. 00:25:22.969 [2024-07-15 23:28:38.058244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.969 [2024-07-15 23:28:38.058369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.969 [2024-07-15 23:28:38.058395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.969 [2024-07-15 23:28:38.058409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.969 [2024-07-15 23:28:38.058422] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.969 [2024-07-15 23:28:38.058451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.969 qpair failed and we were unable to recover it. 
00:25:22.969 [2024-07-15 23:28:38.068274] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.969 [2024-07-15 23:28:38.068397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.969 [2024-07-15 23:28:38.068423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.969 [2024-07-15 23:28:38.068437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.969 [2024-07-15 23:28:38.068450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.969 [2024-07-15 23:28:38.068478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.969 qpair failed and we were unable to recover it. 00:25:22.969 [2024-07-15 23:28:38.078250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.969 [2024-07-15 23:28:38.078384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.969 [2024-07-15 23:28:38.078422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.969 [2024-07-15 23:28:38.078437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.969 [2024-07-15 23:28:38.078450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.969 [2024-07-15 23:28:38.078479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.969 qpair failed and we were unable to recover it. 00:25:22.969 [2024-07-15 23:28:38.088321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.970 [2024-07-15 23:28:38.088474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.970 [2024-07-15 23:28:38.088500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.970 [2024-07-15 23:28:38.088515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.970 [2024-07-15 23:28:38.088527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.970 [2024-07-15 23:28:38.088555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.970 qpair failed and we were unable to recover it. 
00:25:22.970 [2024-07-15 23:28:38.098346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.970 [2024-07-15 23:28:38.098470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.970 [2024-07-15 23:28:38.098496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.970 [2024-07-15 23:28:38.098511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.970 [2024-07-15 23:28:38.098524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.970 [2024-07-15 23:28:38.098552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.970 qpair failed and we were unable to recover it. 00:25:22.970 [2024-07-15 23:28:38.108474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.970 [2024-07-15 23:28:38.108601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.970 [2024-07-15 23:28:38.108627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.970 [2024-07-15 23:28:38.108641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.970 [2024-07-15 23:28:38.108655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.970 [2024-07-15 23:28:38.108683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.970 qpair failed and we were unable to recover it. 00:25:22.970 [2024-07-15 23:28:38.118389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.970 [2024-07-15 23:28:38.118527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.970 [2024-07-15 23:28:38.118559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.970 [2024-07-15 23:28:38.118575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.970 [2024-07-15 23:28:38.118588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.970 [2024-07-15 23:28:38.118617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.970 qpair failed and we were unable to recover it. 
00:25:22.970 [2024-07-15 23:28:38.128427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.970 [2024-07-15 23:28:38.128550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.970 [2024-07-15 23:28:38.128575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.970 [2024-07-15 23:28:38.128590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.970 [2024-07-15 23:28:38.128603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.970 [2024-07-15 23:28:38.128632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.970 qpair failed and we were unable to recover it. 00:25:22.970 [2024-07-15 23:28:38.138549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.970 [2024-07-15 23:28:38.138668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.970 [2024-07-15 23:28:38.138693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.970 [2024-07-15 23:28:38.138709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.970 [2024-07-15 23:28:38.138722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.970 [2024-07-15 23:28:38.138758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.970 qpair failed and we were unable to recover it. 00:25:22.970 [2024-07-15 23:28:38.148522] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.970 [2024-07-15 23:28:38.148650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.970 [2024-07-15 23:28:38.148675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.970 [2024-07-15 23:28:38.148690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.970 [2024-07-15 23:28:38.148703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.970 [2024-07-15 23:28:38.148732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.970 qpair failed and we were unable to recover it. 
00:25:22.970 [2024-07-15 23:28:38.158530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.970 [2024-07-15 23:28:38.158658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.970 [2024-07-15 23:28:38.158684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.970 [2024-07-15 23:28:38.158699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.970 [2024-07-15 23:28:38.158712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.970 [2024-07-15 23:28:38.158753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.970 qpair failed and we were unable to recover it. 00:25:22.970 [2024-07-15 23:28:38.168551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.970 [2024-07-15 23:28:38.168696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.970 [2024-07-15 23:28:38.168722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.970 [2024-07-15 23:28:38.168744] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.970 [2024-07-15 23:28:38.168760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.970 [2024-07-15 23:28:38.168789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.970 qpair failed and we were unable to recover it. 00:25:22.970 [2024-07-15 23:28:38.178510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.970 [2024-07-15 23:28:38.178631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.970 [2024-07-15 23:28:38.178657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.970 [2024-07-15 23:28:38.178672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.970 [2024-07-15 23:28:38.178685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.970 [2024-07-15 23:28:38.178714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.970 qpair failed and we were unable to recover it. 
00:25:22.970 [2024-07-15 23:28:38.188557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.970 [2024-07-15 23:28:38.188688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.970 [2024-07-15 23:28:38.188713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.970 [2024-07-15 23:28:38.188728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.971 [2024-07-15 23:28:38.188748] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.971 [2024-07-15 23:28:38.188778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.971 qpair failed and we were unable to recover it. 00:25:22.971 [2024-07-15 23:28:38.198700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.971 [2024-07-15 23:28:38.198827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.971 [2024-07-15 23:28:38.198853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.971 [2024-07-15 23:28:38.198868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.971 [2024-07-15 23:28:38.198881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.971 [2024-07-15 23:28:38.198909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.971 qpair failed and we were unable to recover it. 00:25:22.971 [2024-07-15 23:28:38.208637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.971 [2024-07-15 23:28:38.208812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.971 [2024-07-15 23:28:38.208842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.971 [2024-07-15 23:28:38.208858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.971 [2024-07-15 23:28:38.208871] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.971 [2024-07-15 23:28:38.208900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.971 qpair failed and we were unable to recover it. 
00:25:22.971 [2024-07-15 23:28:38.218640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.971 [2024-07-15 23:28:38.218768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.971 [2024-07-15 23:28:38.218794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.971 [2024-07-15 23:28:38.218809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.971 [2024-07-15 23:28:38.218822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.971 [2024-07-15 23:28:38.218850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.971 qpair failed and we were unable to recover it. 00:25:22.971 [2024-07-15 23:28:38.228891] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.971 [2024-07-15 23:28:38.229016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.971 [2024-07-15 23:28:38.229045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.971 [2024-07-15 23:28:38.229060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.971 [2024-07-15 23:28:38.229073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.971 [2024-07-15 23:28:38.229101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.971 qpair failed and we were unable to recover it. 00:25:22.971 [2024-07-15 23:28:38.238774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.971 [2024-07-15 23:28:38.238921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.971 [2024-07-15 23:28:38.238947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.971 [2024-07-15 23:28:38.238962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.971 [2024-07-15 23:28:38.238975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.971 [2024-07-15 23:28:38.239004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.971 qpair failed and we were unable to recover it. 
00:25:22.971 [2024-07-15 23:28:38.248819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.971 [2024-07-15 23:28:38.248935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.971 [2024-07-15 23:28:38.248960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.971 [2024-07-15 23:28:38.248975] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.971 [2024-07-15 23:28:38.248988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.971 [2024-07-15 23:28:38.249022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.971 qpair failed and we were unable to recover it. 00:25:22.971 [2024-07-15 23:28:38.258821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.971 [2024-07-15 23:28:38.258933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.971 [2024-07-15 23:28:38.258958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.971 [2024-07-15 23:28:38.258973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.971 [2024-07-15 23:28:38.258986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.971 [2024-07-15 23:28:38.259014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.971 qpair failed and we were unable to recover it. 00:25:22.971 [2024-07-15 23:28:38.268823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.971 [2024-07-15 23:28:38.268982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.971 [2024-07-15 23:28:38.269007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.971 [2024-07-15 23:28:38.269021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.971 [2024-07-15 23:28:38.269035] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.971 [2024-07-15 23:28:38.269063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.971 qpair failed and we were unable to recover it. 
00:25:22.971 [2024-07-15 23:28:38.278837] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:22.971 [2024-07-15 23:28:38.278947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:22.971 [2024-07-15 23:28:38.278973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:22.971 [2024-07-15 23:28:38.278988] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:22.971 [2024-07-15 23:28:38.279001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:22.971 [2024-07-15 23:28:38.279030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:22.971 qpair failed and we were unable to recover it. 00:25:23.230 [2024-07-15 23:28:38.288928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.230 [2024-07-15 23:28:38.289051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.230 [2024-07-15 23:28:38.289076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.230 [2024-07-15 23:28:38.289091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.230 [2024-07-15 23:28:38.289104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.230 [2024-07-15 23:28:38.289132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.230 qpair failed and we were unable to recover it. 00:25:23.230 [2024-07-15 23:28:38.298919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.230 [2024-07-15 23:28:38.299062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.230 [2024-07-15 23:28:38.299092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.230 [2024-07-15 23:28:38.299107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.230 [2024-07-15 23:28:38.299120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.230 [2024-07-15 23:28:38.299149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.230 qpair failed and we were unable to recover it. 
00:25:23.230 [2024-07-15 23:28:38.308964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.230 [2024-07-15 23:28:38.309073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.230 [2024-07-15 23:28:38.309098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.230 [2024-07-15 23:28:38.309113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.230 [2024-07-15 23:28:38.309126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.230 [2024-07-15 23:28:38.309154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.230 qpair failed and we were unable to recover it. 00:25:23.230 [2024-07-15 23:28:38.319001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.230 [2024-07-15 23:28:38.319109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.230 [2024-07-15 23:28:38.319133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.230 [2024-07-15 23:28:38.319148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.230 [2024-07-15 23:28:38.319161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.230 [2024-07-15 23:28:38.319189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.230 qpair failed and we were unable to recover it. 00:25:23.230 [2024-07-15 23:28:38.329042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.230 [2024-07-15 23:28:38.329167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.230 [2024-07-15 23:28:38.329191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.230 [2024-07-15 23:28:38.329206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.230 [2024-07-15 23:28:38.329219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.230 [2024-07-15 23:28:38.329248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.230 qpair failed and we were unable to recover it. 
00:25:23.230 [2024-07-15 23:28:38.339053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.230 [2024-07-15 23:28:38.339178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.230 [2024-07-15 23:28:38.339203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.230 [2024-07-15 23:28:38.339218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.230 [2024-07-15 23:28:38.339236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.230 [2024-07-15 23:28:38.339265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.230 qpair failed and we were unable to recover it. 00:25:23.230 [2024-07-15 23:28:38.349076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.230 [2024-07-15 23:28:38.349197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.230 [2024-07-15 23:28:38.349222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.230 [2024-07-15 23:28:38.349237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.230 [2024-07-15 23:28:38.349250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.230 [2024-07-15 23:28:38.349278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.230 qpair failed and we were unable to recover it. 00:25:23.230 [2024-07-15 23:28:38.359117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.230 [2024-07-15 23:28:38.359276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.230 [2024-07-15 23:28:38.359301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.230 [2024-07-15 23:28:38.359315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.230 [2024-07-15 23:28:38.359329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.230 [2024-07-15 23:28:38.359357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.230 qpair failed and we were unable to recover it. 
00:25:23.230 [2024-07-15 23:28:38.369095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.230 [2024-07-15 23:28:38.369221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.230 [2024-07-15 23:28:38.369246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.230 [2024-07-15 23:28:38.369261] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.230 [2024-07-15 23:28:38.369274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.230 [2024-07-15 23:28:38.369302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.230 qpair failed and we were unable to recover it. 00:25:23.230 [2024-07-15 23:28:38.379111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.230 [2024-07-15 23:28:38.379230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.230 [2024-07-15 23:28:38.379256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.230 [2024-07-15 23:28:38.379271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.230 [2024-07-15 23:28:38.379285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.230 [2024-07-15 23:28:38.379313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.230 qpair failed and we were unable to recover it. 00:25:23.230 [2024-07-15 23:28:38.389161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.230 [2024-07-15 23:28:38.389289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.230 [2024-07-15 23:28:38.389315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.230 [2024-07-15 23:28:38.389329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.230 [2024-07-15 23:28:38.389343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.230 [2024-07-15 23:28:38.389371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.230 qpair failed and we were unable to recover it. 
00:25:23.230 [2024-07-15 23:28:38.399186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.230 [2024-07-15 23:28:38.399357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.230 [2024-07-15 23:28:38.399383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.230 [2024-07-15 23:28:38.399398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.230 [2024-07-15 23:28:38.399411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.230 [2024-07-15 23:28:38.399441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.230 qpair failed and we were unable to recover it. 00:25:23.230 [2024-07-15 23:28:38.409223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.230 [2024-07-15 23:28:38.409355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.230 [2024-07-15 23:28:38.409382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.230 [2024-07-15 23:28:38.409396] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.230 [2024-07-15 23:28:38.409410] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.230 [2024-07-15 23:28:38.409438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.230 qpair failed and we were unable to recover it. 00:25:23.230 [2024-07-15 23:28:38.419270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.230 [2024-07-15 23:28:38.419394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.230 [2024-07-15 23:28:38.419419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.230 [2024-07-15 23:28:38.419434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.230 [2024-07-15 23:28:38.419447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.230 [2024-07-15 23:28:38.419476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.230 qpair failed and we were unable to recover it. 
00:25:23.230 [2024-07-15 23:28:38.429302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.230 [2024-07-15 23:28:38.429428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.230 [2024-07-15 23:28:38.429453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.230 [2024-07-15 23:28:38.429468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.230 [2024-07-15 23:28:38.429486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.230 [2024-07-15 23:28:38.429516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.230 qpair failed and we were unable to recover it. 00:25:23.230 [2024-07-15 23:28:38.439367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.230 [2024-07-15 23:28:38.439488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.230 [2024-07-15 23:28:38.439515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.230 [2024-07-15 23:28:38.439530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.231 [2024-07-15 23:28:38.439543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.231 [2024-07-15 23:28:38.439572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.231 qpair failed and we were unable to recover it. 00:25:23.231 [2024-07-15 23:28:38.449404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.231 [2024-07-15 23:28:38.449574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.231 [2024-07-15 23:28:38.449600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.231 [2024-07-15 23:28:38.449615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.231 [2024-07-15 23:28:38.449628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.231 [2024-07-15 23:28:38.449657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.231 qpair failed and we were unable to recover it. 
00:25:23.231 [2024-07-15 23:28:38.459341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.231 [2024-07-15 23:28:38.459469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.231 [2024-07-15 23:28:38.459494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.231 [2024-07-15 23:28:38.459509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.231 [2024-07-15 23:28:38.459523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.231 [2024-07-15 23:28:38.459551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.231 qpair failed and we were unable to recover it. 00:25:23.231 [2024-07-15 23:28:38.469373] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.231 [2024-07-15 23:28:38.469500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.231 [2024-07-15 23:28:38.469525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.231 [2024-07-15 23:28:38.469541] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.231 [2024-07-15 23:28:38.469554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.231 [2024-07-15 23:28:38.469582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.231 qpair failed and we were unable to recover it. 00:25:23.231 [2024-07-15 23:28:38.479383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.231 [2024-07-15 23:28:38.479513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.231 [2024-07-15 23:28:38.479539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.231 [2024-07-15 23:28:38.479555] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.231 [2024-07-15 23:28:38.479568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.231 [2024-07-15 23:28:38.479596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.231 qpair failed and we were unable to recover it. 
00:25:23.231 [2024-07-15 23:28:38.489518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.231 [2024-07-15 23:28:38.489645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.231 [2024-07-15 23:28:38.489671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.231 [2024-07-15 23:28:38.489686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.231 [2024-07-15 23:28:38.489699] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.231 [2024-07-15 23:28:38.489727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.231 qpair failed and we were unable to recover it. 00:25:23.231 [2024-07-15 23:28:38.499480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.231 [2024-07-15 23:28:38.499607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.231 [2024-07-15 23:28:38.499633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.231 [2024-07-15 23:28:38.499648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.231 [2024-07-15 23:28:38.499661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.231 [2024-07-15 23:28:38.499689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.231 qpair failed and we were unable to recover it. 00:25:23.231 [2024-07-15 23:28:38.509512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.231 [2024-07-15 23:28:38.509639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.231 [2024-07-15 23:28:38.509664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.231 [2024-07-15 23:28:38.509679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.231 [2024-07-15 23:28:38.509692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.231 [2024-07-15 23:28:38.509729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.231 qpair failed and we were unable to recover it. 
00:25:23.231 [2024-07-15 23:28:38.519513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.231 [2024-07-15 23:28:38.519633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.231 [2024-07-15 23:28:38.519658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.231 [2024-07-15 23:28:38.519672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.231 [2024-07-15 23:28:38.519690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.231 [2024-07-15 23:28:38.519720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.231 qpair failed and we were unable to recover it. 00:25:23.231 [2024-07-15 23:28:38.529581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.231 [2024-07-15 23:28:38.529708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.231 [2024-07-15 23:28:38.529732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.231 [2024-07-15 23:28:38.529755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.231 [2024-07-15 23:28:38.529767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.231 [2024-07-15 23:28:38.529797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.231 qpair failed and we were unable to recover it. 00:25:23.231 [2024-07-15 23:28:38.539595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.231 [2024-07-15 23:28:38.539721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.231 [2024-07-15 23:28:38.539755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.231 [2024-07-15 23:28:38.539771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.231 [2024-07-15 23:28:38.539784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.231 [2024-07-15 23:28:38.539812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.231 qpair failed and we were unable to recover it. 
00:25:23.489 [2024-07-15 23:28:38.549665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.489 [2024-07-15 23:28:38.549794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.489 [2024-07-15 23:28:38.549820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.489 [2024-07-15 23:28:38.549835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.489 [2024-07-15 23:28:38.549848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.489 [2024-07-15 23:28:38.549877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.489 qpair failed and we were unable to recover it. 00:25:23.489 [2024-07-15 23:28:38.559701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.489 [2024-07-15 23:28:38.559838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.489 [2024-07-15 23:28:38.559864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.489 [2024-07-15 23:28:38.559878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.489 [2024-07-15 23:28:38.559891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.489 [2024-07-15 23:28:38.559920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.489 qpair failed and we were unable to recover it. 00:25:23.489 [2024-07-15 23:28:38.569669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.489 [2024-07-15 23:28:38.569820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.489 [2024-07-15 23:28:38.569845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.489 [2024-07-15 23:28:38.569860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.489 [2024-07-15 23:28:38.569873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.489 [2024-07-15 23:28:38.569901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.489 qpair failed and we were unable to recover it. 
00:25:23.489 [2024-07-15 23:28:38.579698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.489 [2024-07-15 23:28:38.579827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.489 [2024-07-15 23:28:38.579853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.489 [2024-07-15 23:28:38.579867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.489 [2024-07-15 23:28:38.579880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.489 [2024-07-15 23:28:38.579909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.489 qpair failed and we were unable to recover it. 00:25:23.489 [2024-07-15 23:28:38.589757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.489 [2024-07-15 23:28:38.589871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.489 [2024-07-15 23:28:38.589896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.489 [2024-07-15 23:28:38.589910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.489 [2024-07-15 23:28:38.589923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.489 [2024-07-15 23:28:38.589951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.489 qpair failed and we were unable to recover it. 00:25:23.489 [2024-07-15 23:28:38.599783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.489 [2024-07-15 23:28:38.599889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.489 [2024-07-15 23:28:38.599914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.489 [2024-07-15 23:28:38.599929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.489 [2024-07-15 23:28:38.599942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.489 [2024-07-15 23:28:38.599970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.489 qpair failed and we were unable to recover it. 
00:25:23.489 [2024-07-15 23:28:38.609806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.489 [2024-07-15 23:28:38.609958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.489 [2024-07-15 23:28:38.609983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.489 [2024-07-15 23:28:38.610004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.489 [2024-07-15 23:28:38.610018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.489 [2024-07-15 23:28:38.610046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.489 qpair failed and we were unable to recover it. 00:25:23.489 [2024-07-15 23:28:38.619811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.489 [2024-07-15 23:28:38.619966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.489 [2024-07-15 23:28:38.619991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.489 [2024-07-15 23:28:38.620006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.490 [2024-07-15 23:28:38.620019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.490 [2024-07-15 23:28:38.620048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.490 qpair failed and we were unable to recover it. 00:25:23.490 [2024-07-15 23:28:38.629897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.490 [2024-07-15 23:28:38.630048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.490 [2024-07-15 23:28:38.630073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.490 [2024-07-15 23:28:38.630088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.490 [2024-07-15 23:28:38.630101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.490 [2024-07-15 23:28:38.630131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.490 qpair failed and we were unable to recover it. 
00:25:23.490 [2024-07-15 23:28:38.639885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.490 [2024-07-15 23:28:38.640047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.490 [2024-07-15 23:28:38.640072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.490 [2024-07-15 23:28:38.640087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.490 [2024-07-15 23:28:38.640100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.490 [2024-07-15 23:28:38.640128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.490 qpair failed and we were unable to recover it. 00:25:23.490 [2024-07-15 23:28:38.649929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.490 [2024-07-15 23:28:38.650038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.490 [2024-07-15 23:28:38.650063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.490 [2024-07-15 23:28:38.650078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.490 [2024-07-15 23:28:38.650091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.490 [2024-07-15 23:28:38.650120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.490 qpair failed and we were unable to recover it. 00:25:23.490 [2024-07-15 23:28:38.659939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.490 [2024-07-15 23:28:38.660047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.490 [2024-07-15 23:28:38.660072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.490 [2024-07-15 23:28:38.660087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.490 [2024-07-15 23:28:38.660100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.490 [2024-07-15 23:28:38.660128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.490 qpair failed and we were unable to recover it. 
00:25:23.490 [2024-07-15 23:28:38.670019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.490 [2024-07-15 23:28:38.670158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.490 [2024-07-15 23:28:38.670183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.490 [2024-07-15 23:28:38.670199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.490 [2024-07-15 23:28:38.670212] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.490 [2024-07-15 23:28:38.670241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.490 qpair failed and we were unable to recover it. 00:25:23.490 [2024-07-15 23:28:38.679971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.490 [2024-07-15 23:28:38.680083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.490 [2024-07-15 23:28:38.680108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.490 [2024-07-15 23:28:38.680122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.490 [2024-07-15 23:28:38.680135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.490 [2024-07-15 23:28:38.680164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.490 qpair failed and we were unable to recover it. 00:25:23.490 [2024-07-15 23:28:38.690074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.490 [2024-07-15 23:28:38.690234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.490 [2024-07-15 23:28:38.690260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.490 [2024-07-15 23:28:38.690275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.490 [2024-07-15 23:28:38.690288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.490 [2024-07-15 23:28:38.690317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.490 qpair failed and we were unable to recover it. 
00:25:23.490 [2024-07-15 23:28:38.700015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.490 [2024-07-15 23:28:38.700164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.490 [2024-07-15 23:28:38.700189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.490 [2024-07-15 23:28:38.700210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.490 [2024-07-15 23:28:38.700225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.490 [2024-07-15 23:28:38.700253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.490 qpair failed and we were unable to recover it. 00:25:23.490 [2024-07-15 23:28:38.710061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.490 [2024-07-15 23:28:38.710181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.490 [2024-07-15 23:28:38.710206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.490 [2024-07-15 23:28:38.710220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.490 [2024-07-15 23:28:38.710234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.490 [2024-07-15 23:28:38.710262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.490 qpair failed and we were unable to recover it. 00:25:23.490 [2024-07-15 23:28:38.720085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.490 [2024-07-15 23:28:38.720209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.490 [2024-07-15 23:28:38.720234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.490 [2024-07-15 23:28:38.720248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.490 [2024-07-15 23:28:38.720261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.490 [2024-07-15 23:28:38.720290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.490 qpair failed and we were unable to recover it. 
00:25:23.490 [2024-07-15 23:28:38.730153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.490 [2024-07-15 23:28:38.730289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.490 [2024-07-15 23:28:38.730315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.490 [2024-07-15 23:28:38.730329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.490 [2024-07-15 23:28:38.730342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.490 [2024-07-15 23:28:38.730370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.490 qpair failed and we were unable to recover it. 00:25:23.490 [2024-07-15 23:28:38.740188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.490 [2024-07-15 23:28:38.740309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.490 [2024-07-15 23:28:38.740335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.491 [2024-07-15 23:28:38.740349] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.491 [2024-07-15 23:28:38.740362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.491 [2024-07-15 23:28:38.740393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.491 qpair failed and we were unable to recover it. 00:25:23.491 [2024-07-15 23:28:38.750203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.491 [2024-07-15 23:28:38.750339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.491 [2024-07-15 23:28:38.750364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.491 [2024-07-15 23:28:38.750379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.491 [2024-07-15 23:28:38.750391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.491 [2024-07-15 23:28:38.750419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.491 qpair failed and we were unable to recover it. 
00:25:23.491 [2024-07-15 23:28:38.760235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.491 [2024-07-15 23:28:38.760356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.491 [2024-07-15 23:28:38.760381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.491 [2024-07-15 23:28:38.760396] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.491 [2024-07-15 23:28:38.760409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.491 [2024-07-15 23:28:38.760437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.491 qpair failed and we were unable to recover it. 00:25:23.491 [2024-07-15 23:28:38.770239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.491 [2024-07-15 23:28:38.770392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.491 [2024-07-15 23:28:38.770417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.491 [2024-07-15 23:28:38.770431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.491 [2024-07-15 23:28:38.770445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.491 [2024-07-15 23:28:38.770473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.491 qpair failed and we were unable to recover it. 00:25:23.491 [2024-07-15 23:28:38.780271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.491 [2024-07-15 23:28:38.780446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.491 [2024-07-15 23:28:38.780471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.491 [2024-07-15 23:28:38.780486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.491 [2024-07-15 23:28:38.780499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.491 [2024-07-15 23:28:38.780528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.491 qpair failed and we were unable to recover it. 
00:25:23.491 [2024-07-15 23:28:38.790327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.491 [2024-07-15 23:28:38.790490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.491 [2024-07-15 23:28:38.790520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.491 [2024-07-15 23:28:38.790536] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.491 [2024-07-15 23:28:38.790549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.491 [2024-07-15 23:28:38.790578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.491 qpair failed and we were unable to recover it. 00:25:23.491 [2024-07-15 23:28:38.800298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.491 [2024-07-15 23:28:38.800431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.491 [2024-07-15 23:28:38.800456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.491 [2024-07-15 23:28:38.800471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.491 [2024-07-15 23:28:38.800483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.491 [2024-07-15 23:28:38.800512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.491 qpair failed and we were unable to recover it. 00:25:23.749 [2024-07-15 23:28:38.810356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.749 [2024-07-15 23:28:38.810506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.749 [2024-07-15 23:28:38.810532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.749 [2024-07-15 23:28:38.810547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.749 [2024-07-15 23:28:38.810560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.749 [2024-07-15 23:28:38.810588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.749 qpair failed and we were unable to recover it. 
00:25:23.749 [2024-07-15 23:28:38.820340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.749 [2024-07-15 23:28:38.820458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.749 [2024-07-15 23:28:38.820483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.749 [2024-07-15 23:28:38.820497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.749 [2024-07-15 23:28:38.820510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.749 [2024-07-15 23:28:38.820540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.749 qpair failed and we were unable to recover it. 00:25:23.749 [2024-07-15 23:28:38.830388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.749 [2024-07-15 23:28:38.830503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.749 [2024-07-15 23:28:38.830528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.749 [2024-07-15 23:28:38.830543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.749 [2024-07-15 23:28:38.830556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.749 [2024-07-15 23:28:38.830584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.749 qpair failed and we were unable to recover it. 00:25:23.749 [2024-07-15 23:28:38.840403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.749 [2024-07-15 23:28:38.840528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.749 [2024-07-15 23:28:38.840553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.749 [2024-07-15 23:28:38.840567] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.749 [2024-07-15 23:28:38.840580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.749 [2024-07-15 23:28:38.840608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.749 qpair failed and we were unable to recover it. 
00:25:23.749 [2024-07-15 23:28:38.850476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.749 [2024-07-15 23:28:38.850606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.749 [2024-07-15 23:28:38.850632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.750 [2024-07-15 23:28:38.850646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.750 [2024-07-15 23:28:38.850659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.750 [2024-07-15 23:28:38.850687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.750 qpair failed and we were unable to recover it. 00:25:23.750 [2024-07-15 23:28:38.860508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.750 [2024-07-15 23:28:38.860629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.750 [2024-07-15 23:28:38.860654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.750 [2024-07-15 23:28:38.860668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.750 [2024-07-15 23:28:38.860681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.750 [2024-07-15 23:28:38.860709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.750 qpair failed and we were unable to recover it. 00:25:23.750 [2024-07-15 23:28:38.870523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.750 [2024-07-15 23:28:38.870651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.750 [2024-07-15 23:28:38.870677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.750 [2024-07-15 23:28:38.870691] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.750 [2024-07-15 23:28:38.870704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.750 [2024-07-15 23:28:38.870732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.750 qpair failed and we were unable to recover it. 
00:25:23.750 [2024-07-15 23:28:38.880546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.750 [2024-07-15 23:28:38.880672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.750 [2024-07-15 23:28:38.880704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.750 [2024-07-15 23:28:38.880719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.750 [2024-07-15 23:28:38.880732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.750 [2024-07-15 23:28:38.880770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.750 qpair failed and we were unable to recover it. 00:25:23.750 [2024-07-15 23:28:38.890600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.750 [2024-07-15 23:28:38.890723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.750 [2024-07-15 23:28:38.890755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.750 [2024-07-15 23:28:38.890771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.750 [2024-07-15 23:28:38.890784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.750 [2024-07-15 23:28:38.890812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.750 qpair failed and we were unable to recover it. 00:25:23.750 [2024-07-15 23:28:38.900612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.750 [2024-07-15 23:28:38.900748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.750 [2024-07-15 23:28:38.900774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.750 [2024-07-15 23:28:38.900789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.750 [2024-07-15 23:28:38.900802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.750 [2024-07-15 23:28:38.900830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.750 qpair failed and we were unable to recover it. 
00:25:23.750 [2024-07-15 23:28:38.910650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.750 [2024-07-15 23:28:38.910823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.750 [2024-07-15 23:28:38.910848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.750 [2024-07-15 23:28:38.910863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.750 [2024-07-15 23:28:38.910876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.750 [2024-07-15 23:28:38.910905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.750 qpair failed and we were unable to recover it. 00:25:23.750 [2024-07-15 23:28:38.920690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.750 [2024-07-15 23:28:38.920822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.750 [2024-07-15 23:28:38.920848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.750 [2024-07-15 23:28:38.920863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.750 [2024-07-15 23:28:38.920876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.750 [2024-07-15 23:28:38.920910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.750 qpair failed and we were unable to recover it. 00:25:23.750 [2024-07-15 23:28:38.930681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.750 [2024-07-15 23:28:38.930831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.750 [2024-07-15 23:28:38.930856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.750 [2024-07-15 23:28:38.930871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.750 [2024-07-15 23:28:38.930885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.750 [2024-07-15 23:28:38.930913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.750 qpair failed and we were unable to recover it. 
00:25:23.750 [2024-07-15 23:28:38.940779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.750 [2024-07-15 23:28:38.940909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.750 [2024-07-15 23:28:38.940934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.750 [2024-07-15 23:28:38.940949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.750 [2024-07-15 23:28:38.940962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.750 [2024-07-15 23:28:38.940991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.750 qpair failed and we were unable to recover it. 00:25:23.750 [2024-07-15 23:28:38.950790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.750 [2024-07-15 23:28:38.950936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.750 [2024-07-15 23:28:38.950961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.750 [2024-07-15 23:28:38.950976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.750 [2024-07-15 23:28:38.950989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.750 [2024-07-15 23:28:38.951018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.750 qpair failed and we were unable to recover it. 00:25:23.750 [2024-07-15 23:28:38.960810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.750 [2024-07-15 23:28:38.960964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.750 [2024-07-15 23:28:38.960989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.750 [2024-07-15 23:28:38.961004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.750 [2024-07-15 23:28:38.961017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.750 [2024-07-15 23:28:38.961045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.751 qpair failed and we were unable to recover it. 
00:25:23.751 [2024-07-15 23:28:38.970903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.751 [2024-07-15 23:28:38.971015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.751 [2024-07-15 23:28:38.971045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.751 [2024-07-15 23:28:38.971060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.751 [2024-07-15 23:28:38.971074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.751 [2024-07-15 23:28:38.971102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.751 qpair failed and we were unable to recover it. 00:25:23.751 [2024-07-15 23:28:38.980828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.751 [2024-07-15 23:28:38.980934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.751 [2024-07-15 23:28:38.980959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.751 [2024-07-15 23:28:38.980974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.751 [2024-07-15 23:28:38.980987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.751 [2024-07-15 23:28:38.981015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.751 qpair failed and we were unable to recover it. 00:25:23.751 [2024-07-15 23:28:38.990872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.751 [2024-07-15 23:28:38.990981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.751 [2024-07-15 23:28:38.991006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.751 [2024-07-15 23:28:38.991021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.751 [2024-07-15 23:28:38.991034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.751 [2024-07-15 23:28:38.991062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.751 qpair failed and we were unable to recover it. 
00:25:23.751 [2024-07-15 23:28:39.000884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.751 [2024-07-15 23:28:39.001000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.751 [2024-07-15 23:28:39.001025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.751 [2024-07-15 23:28:39.001040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.751 [2024-07-15 23:28:39.001053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.751 [2024-07-15 23:28:39.001082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.751 qpair failed and we were unable to recover it. 00:25:23.751 [2024-07-15 23:28:39.010949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.751 [2024-07-15 23:28:39.011067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.751 [2024-07-15 23:28:39.011091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.751 [2024-07-15 23:28:39.011106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.751 [2024-07-15 23:28:39.011118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.751 [2024-07-15 23:28:39.011150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.751 qpair failed and we were unable to recover it. 00:25:23.751 [2024-07-15 23:28:39.020937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.751 [2024-07-15 23:28:39.021047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.751 [2024-07-15 23:28:39.021072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.751 [2024-07-15 23:28:39.021087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.751 [2024-07-15 23:28:39.021100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.751 [2024-07-15 23:28:39.021128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.751 qpair failed and we were unable to recover it. 
00:25:23.751 [2024-07-15 23:28:39.030974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.751 [2024-07-15 23:28:39.031080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.751 [2024-07-15 23:28:39.031105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.751 [2024-07-15 23:28:39.031120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.751 [2024-07-15 23:28:39.031133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.751 [2024-07-15 23:28:39.031161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.751 qpair failed and we were unable to recover it. 00:25:23.751 [2024-07-15 23:28:39.040997] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.751 [2024-07-15 23:28:39.041126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.751 [2024-07-15 23:28:39.041150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.751 [2024-07-15 23:28:39.041165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.751 [2024-07-15 23:28:39.041178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.751 [2024-07-15 23:28:39.041206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.751 qpair failed and we were unable to recover it. 00:25:23.751 [2024-07-15 23:28:39.051075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.751 [2024-07-15 23:28:39.051203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.751 [2024-07-15 23:28:39.051228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.751 [2024-07-15 23:28:39.051242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.751 [2024-07-15 23:28:39.051255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.751 [2024-07-15 23:28:39.051284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.751 qpair failed and we were unable to recover it. 
00:25:23.751 [2024-07-15 23:28:39.061082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:23.751 [2024-07-15 23:28:39.061205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:23.751 [2024-07-15 23:28:39.061234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:23.751 [2024-07-15 23:28:39.061249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:23.751 [2024-07-15 23:28:39.061263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:23.751 [2024-07-15 23:28:39.061291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:23.751 qpair failed and we were unable to recover it. 00:25:24.010 [2024-07-15 23:28:39.071122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.010 [2024-07-15 23:28:39.071246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.010 [2024-07-15 23:28:39.071271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.010 [2024-07-15 23:28:39.071285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.010 [2024-07-15 23:28:39.071298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.010 [2024-07-15 23:28:39.071326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.010 qpair failed and we were unable to recover it. 00:25:24.010 [2024-07-15 23:28:39.081120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.010 [2024-07-15 23:28:39.081236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.010 [2024-07-15 23:28:39.081260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.010 [2024-07-15 23:28:39.081275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.010 [2024-07-15 23:28:39.081288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.010 [2024-07-15 23:28:39.081317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.010 qpair failed and we were unable to recover it. 
00:25:24.010 [2024-07-15 23:28:39.091197] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.010 [2024-07-15 23:28:39.091324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.010 [2024-07-15 23:28:39.091350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.010 [2024-07-15 23:28:39.091366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.010 [2024-07-15 23:28:39.091379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.010 [2024-07-15 23:28:39.091408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.010 qpair failed and we were unable to recover it. 00:25:24.010 [2024-07-15 23:28:39.101211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.010 [2024-07-15 23:28:39.101366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.010 [2024-07-15 23:28:39.101391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.010 [2024-07-15 23:28:39.101406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.010 [2024-07-15 23:28:39.101425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.010 [2024-07-15 23:28:39.101453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.010 qpair failed and we were unable to recover it. 00:25:24.010 [2024-07-15 23:28:39.111210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.010 [2024-07-15 23:28:39.111335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.010 [2024-07-15 23:28:39.111360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.010 [2024-07-15 23:28:39.111375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.010 [2024-07-15 23:28:39.111388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.010 [2024-07-15 23:28:39.111416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.010 qpair failed and we were unable to recover it. 
00:25:24.010 [2024-07-15 23:28:39.121312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.010 [2024-07-15 23:28:39.121434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.010 [2024-07-15 23:28:39.121459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.010 [2024-07-15 23:28:39.121474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.010 [2024-07-15 23:28:39.121487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.010 [2024-07-15 23:28:39.121514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.010 qpair failed and we were unable to recover it. 00:25:24.010 [2024-07-15 23:28:39.131289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.010 [2024-07-15 23:28:39.131419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.010 [2024-07-15 23:28:39.131444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.010 [2024-07-15 23:28:39.131459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.010 [2024-07-15 23:28:39.131472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.010 [2024-07-15 23:28:39.131501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.010 qpair failed and we were unable to recover it. 00:25:24.010 [2024-07-15 23:28:39.141322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.010 [2024-07-15 23:28:39.141441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.010 [2024-07-15 23:28:39.141466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.010 [2024-07-15 23:28:39.141480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.010 [2024-07-15 23:28:39.141493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.010 [2024-07-15 23:28:39.141522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.010 qpair failed and we were unable to recover it. 
00:25:24.010 [2024-07-15 23:28:39.151339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.011 [2024-07-15 23:28:39.151469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.011 [2024-07-15 23:28:39.151495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.011 [2024-07-15 23:28:39.151510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.011 [2024-07-15 23:28:39.151523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.011 [2024-07-15 23:28:39.151551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.011 qpair failed and we were unable to recover it. 00:25:24.011 [2024-07-15 23:28:39.161358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.011 [2024-07-15 23:28:39.161476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.011 [2024-07-15 23:28:39.161501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.011 [2024-07-15 23:28:39.161516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.011 [2024-07-15 23:28:39.161529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.011 [2024-07-15 23:28:39.161557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.011 qpair failed and we were unable to recover it. 00:25:24.011 [2024-07-15 23:28:39.171362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.011 [2024-07-15 23:28:39.171530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.011 [2024-07-15 23:28:39.171555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.011 [2024-07-15 23:28:39.171570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.011 [2024-07-15 23:28:39.171583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.011 [2024-07-15 23:28:39.171611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.011 qpair failed and we were unable to recover it. 
00:25:24.011 [2024-07-15 23:28:39.181392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.011 [2024-07-15 23:28:39.181526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.011 [2024-07-15 23:28:39.181551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.011 [2024-07-15 23:28:39.181566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.011 [2024-07-15 23:28:39.181579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.011 [2024-07-15 23:28:39.181607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.011 qpair failed and we were unable to recover it. 00:25:24.011 [2024-07-15 23:28:39.191389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.011 [2024-07-15 23:28:39.191505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.011 [2024-07-15 23:28:39.191530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.011 [2024-07-15 23:28:39.191545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.011 [2024-07-15 23:28:39.191563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.011 [2024-07-15 23:28:39.191592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.011 qpair failed and we were unable to recover it. 00:25:24.011 [2024-07-15 23:28:39.201422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.011 [2024-07-15 23:28:39.201549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.011 [2024-07-15 23:28:39.201575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.011 [2024-07-15 23:28:39.201590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.011 [2024-07-15 23:28:39.201602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.011 [2024-07-15 23:28:39.201631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.011 qpair failed and we were unable to recover it. 
00:25:24.011 [2024-07-15 23:28:39.211463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.011 [2024-07-15 23:28:39.211593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.011 [2024-07-15 23:28:39.211618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.011 [2024-07-15 23:28:39.211633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.011 [2024-07-15 23:28:39.211646] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.011 [2024-07-15 23:28:39.211675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.011 qpair failed and we were unable to recover it. 00:25:24.011 [2024-07-15 23:28:39.221514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.011 [2024-07-15 23:28:39.221635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.011 [2024-07-15 23:28:39.221660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.011 [2024-07-15 23:28:39.221674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.011 [2024-07-15 23:28:39.221688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.011 [2024-07-15 23:28:39.221716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.011 qpair failed and we were unable to recover it. 00:25:24.011 [2024-07-15 23:28:39.231512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.011 [2024-07-15 23:28:39.231635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.011 [2024-07-15 23:28:39.231661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.011 [2024-07-15 23:28:39.231676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.011 [2024-07-15 23:28:39.231689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.011 [2024-07-15 23:28:39.231718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.011 qpair failed and we were unable to recover it. 
00:25:24.011 [2024-07-15 23:28:39.241549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.011 [2024-07-15 23:28:39.241725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.011 [2024-07-15 23:28:39.241758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.011 [2024-07-15 23:28:39.241774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.011 [2024-07-15 23:28:39.241787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.011 [2024-07-15 23:28:39.241816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.011 qpair failed and we were unable to recover it. 00:25:24.011 [2024-07-15 23:28:39.251571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.011 [2024-07-15 23:28:39.251694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.011 [2024-07-15 23:28:39.251719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.011 [2024-07-15 23:28:39.251734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.011 [2024-07-15 23:28:39.251756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.011 [2024-07-15 23:28:39.251785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.012 qpair failed and we were unable to recover it. 00:25:24.012 [2024-07-15 23:28:39.261618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.012 [2024-07-15 23:28:39.261799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.012 [2024-07-15 23:28:39.261824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.012 [2024-07-15 23:28:39.261839] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.012 [2024-07-15 23:28:39.261852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.012 [2024-07-15 23:28:39.261881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.012 qpair failed and we were unable to recover it. 
00:25:24.012 [2024-07-15 23:28:39.271667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.012 [2024-07-15 23:28:39.271789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.012 [2024-07-15 23:28:39.271815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.012 [2024-07-15 23:28:39.271830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.012 [2024-07-15 23:28:39.271843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.012 [2024-07-15 23:28:39.271872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.012 qpair failed and we were unable to recover it. 00:25:24.012 [2024-07-15 23:28:39.281668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.012 [2024-07-15 23:28:39.281836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.012 [2024-07-15 23:28:39.281861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.012 [2024-07-15 23:28:39.281876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.012 [2024-07-15 23:28:39.281894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.012 [2024-07-15 23:28:39.281924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.012 qpair failed and we were unable to recover it. 00:25:24.012 [2024-07-15 23:28:39.291745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.012 [2024-07-15 23:28:39.291889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.012 [2024-07-15 23:28:39.291915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.012 [2024-07-15 23:28:39.291930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.012 [2024-07-15 23:28:39.291943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.012 [2024-07-15 23:28:39.291972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.012 qpair failed and we were unable to recover it. 
00:25:24.012 [2024-07-15 23:28:39.301760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.012 [2024-07-15 23:28:39.301895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.012 [2024-07-15 23:28:39.301920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.012 [2024-07-15 23:28:39.301935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.012 [2024-07-15 23:28:39.301948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.012 [2024-07-15 23:28:39.301977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.012 qpair failed and we were unable to recover it. 00:25:24.012 [2024-07-15 23:28:39.311805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.012 [2024-07-15 23:28:39.311913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.012 [2024-07-15 23:28:39.311939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.012 [2024-07-15 23:28:39.311954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.012 [2024-07-15 23:28:39.311967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.012 [2024-07-15 23:28:39.311995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.012 qpair failed and we were unable to recover it. 00:25:24.012 [2024-07-15 23:28:39.321842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.012 [2024-07-15 23:28:39.321944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.012 [2024-07-15 23:28:39.321969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.012 [2024-07-15 23:28:39.321983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.012 [2024-07-15 23:28:39.321996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.012 [2024-07-15 23:28:39.322025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.012 qpair failed and we were unable to recover it. 
00:25:24.271 [2024-07-15 23:28:39.331866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.271 [2024-07-15 23:28:39.332001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.271 [2024-07-15 23:28:39.332027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.271 [2024-07-15 23:28:39.332042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.271 [2024-07-15 23:28:39.332055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.271 [2024-07-15 23:28:39.332083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.271 qpair failed and we were unable to recover it. 00:25:24.271 [2024-07-15 23:28:39.341875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.271 [2024-07-15 23:28:39.341996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.271 [2024-07-15 23:28:39.342021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.271 [2024-07-15 23:28:39.342036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.271 [2024-07-15 23:28:39.342049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.271 [2024-07-15 23:28:39.342077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.271 qpair failed and we were unable to recover it. 00:25:24.271 [2024-07-15 23:28:39.351898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.271 [2024-07-15 23:28:39.352009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.271 [2024-07-15 23:28:39.352035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.271 [2024-07-15 23:28:39.352049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.271 [2024-07-15 23:28:39.352062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.271 [2024-07-15 23:28:39.352090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.271 qpair failed and we were unable to recover it. 
00:25:24.271 [2024-07-15 23:28:39.361898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.271 [2024-07-15 23:28:39.362040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.271 [2024-07-15 23:28:39.362065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.271 [2024-07-15 23:28:39.362079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.271 [2024-07-15 23:28:39.362092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.271 [2024-07-15 23:28:39.362120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.271 qpair failed and we were unable to recover it. 00:25:24.271 [2024-07-15 23:28:39.371953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.271 [2024-07-15 23:28:39.372065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.271 [2024-07-15 23:28:39.372091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.271 [2024-07-15 23:28:39.372112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.271 [2024-07-15 23:28:39.372125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.271 [2024-07-15 23:28:39.372154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.271 qpair failed and we were unable to recover it. 00:25:24.271 [2024-07-15 23:28:39.381969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.271 [2024-07-15 23:28:39.382076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.271 [2024-07-15 23:28:39.382100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.271 [2024-07-15 23:28:39.382115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.271 [2024-07-15 23:28:39.382128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.271 [2024-07-15 23:28:39.382156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.271 qpair failed and we were unable to recover it. 
00:25:24.271 [2024-07-15 23:28:39.392018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.271 [2024-07-15 23:28:39.392130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.271 [2024-07-15 23:28:39.392155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.271 [2024-07-15 23:28:39.392169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.271 [2024-07-15 23:28:39.392183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.271 [2024-07-15 23:28:39.392211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.271 qpair failed and we were unable to recover it. 00:25:24.271 [2024-07-15 23:28:39.402048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.271 [2024-07-15 23:28:39.402170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.271 [2024-07-15 23:28:39.402195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.271 [2024-07-15 23:28:39.402209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.271 [2024-07-15 23:28:39.402222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.271 [2024-07-15 23:28:39.402251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.271 qpair failed and we were unable to recover it. 00:25:24.271 [2024-07-15 23:28:39.412089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.271 [2024-07-15 23:28:39.412240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.271 [2024-07-15 23:28:39.412265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.271 [2024-07-15 23:28:39.412280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.271 [2024-07-15 23:28:39.412293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.271 [2024-07-15 23:28:39.412321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.271 qpair failed and we were unable to recover it. 
00:25:24.272 [2024-07-15 23:28:39.422058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.272 [2024-07-15 23:28:39.422183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.272 [2024-07-15 23:28:39.422208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.272 [2024-07-15 23:28:39.422223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.272 [2024-07-15 23:28:39.422236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.272 [2024-07-15 23:28:39.422264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.272 qpair failed and we were unable to recover it. 00:25:24.272 [2024-07-15 23:28:39.432124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.272 [2024-07-15 23:28:39.432278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.272 [2024-07-15 23:28:39.432305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.272 [2024-07-15 23:28:39.432320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.272 [2024-07-15 23:28:39.432333] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.272 [2024-07-15 23:28:39.432361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.272 qpair failed and we were unable to recover it. 00:25:24.272 [2024-07-15 23:28:39.442120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.272 [2024-07-15 23:28:39.442243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.272 [2024-07-15 23:28:39.442268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.272 [2024-07-15 23:28:39.442283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.272 [2024-07-15 23:28:39.442296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.272 [2024-07-15 23:28:39.442324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.272 qpair failed and we were unable to recover it. 
00:25:24.272 [2024-07-15 23:28:39.452153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.272 [2024-07-15 23:28:39.452280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.272 [2024-07-15 23:28:39.452306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.272 [2024-07-15 23:28:39.452321] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.272 [2024-07-15 23:28:39.452334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.272 [2024-07-15 23:28:39.452362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.272 qpair failed and we were unable to recover it. 00:25:24.272 [2024-07-15 23:28:39.462172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.272 [2024-07-15 23:28:39.462311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.272 [2024-07-15 23:28:39.462336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.272 [2024-07-15 23:28:39.462357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.272 [2024-07-15 23:28:39.462371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.272 [2024-07-15 23:28:39.462401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.272 qpair failed and we were unable to recover it. 00:25:24.272 [2024-07-15 23:28:39.472237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.272 [2024-07-15 23:28:39.472359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.272 [2024-07-15 23:28:39.472384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.272 [2024-07-15 23:28:39.472398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.272 [2024-07-15 23:28:39.472412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.272 [2024-07-15 23:28:39.472440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.272 qpair failed and we were unable to recover it. 
00:25:24.272 [2024-07-15 23:28:39.482226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.272 [2024-07-15 23:28:39.482389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.272 [2024-07-15 23:28:39.482414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.272 [2024-07-15 23:28:39.482430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.272 [2024-07-15 23:28:39.482443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.272 [2024-07-15 23:28:39.482471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.272 qpair failed and we were unable to recover it. 00:25:24.272 [2024-07-15 23:28:39.492247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.272 [2024-07-15 23:28:39.492375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.272 [2024-07-15 23:28:39.492400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.272 [2024-07-15 23:28:39.492415] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.272 [2024-07-15 23:28:39.492428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.272 [2024-07-15 23:28:39.492456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.272 qpair failed and we were unable to recover it. 00:25:24.272 [2024-07-15 23:28:39.502283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.272 [2024-07-15 23:28:39.502419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.272 [2024-07-15 23:28:39.502444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.272 [2024-07-15 23:28:39.502459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.272 [2024-07-15 23:28:39.502472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.272 [2024-07-15 23:28:39.502500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.272 qpair failed and we were unable to recover it. 
00:25:24.272 [2024-07-15 23:28:39.512318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.272 [2024-07-15 23:28:39.512433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.272 [2024-07-15 23:28:39.512458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.272 [2024-07-15 23:28:39.512473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.272 [2024-07-15 23:28:39.512486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.272 [2024-07-15 23:28:39.512514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.272 qpair failed and we were unable to recover it. 00:25:24.272 [2024-07-15 23:28:39.522341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.272 [2024-07-15 23:28:39.522473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.273 [2024-07-15 23:28:39.522499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.273 [2024-07-15 23:28:39.522514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.273 [2024-07-15 23:28:39.522527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.273 [2024-07-15 23:28:39.522555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.273 qpair failed and we were unable to recover it. 00:25:24.273 [2024-07-15 23:28:39.532385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.273 [2024-07-15 23:28:39.532508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.273 [2024-07-15 23:28:39.532534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.273 [2024-07-15 23:28:39.532548] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.273 [2024-07-15 23:28:39.532562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.273 [2024-07-15 23:28:39.532590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.273 qpair failed and we were unable to recover it. 
00:25:24.273 [2024-07-15 23:28:39.542428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.273 [2024-07-15 23:28:39.542592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.273 [2024-07-15 23:28:39.542617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.273 [2024-07-15 23:28:39.542632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.273 [2024-07-15 23:28:39.542645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.273 [2024-07-15 23:28:39.542674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.273 qpair failed and we were unable to recover it. 00:25:24.273 [2024-07-15 23:28:39.552498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.273 [2024-07-15 23:28:39.552623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.273 [2024-07-15 23:28:39.552648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.273 [2024-07-15 23:28:39.552669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.273 [2024-07-15 23:28:39.552683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.273 [2024-07-15 23:28:39.552722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.273 qpair failed and we were unable to recover it. 00:25:24.273 [2024-07-15 23:28:39.562481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.273 [2024-07-15 23:28:39.562634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.273 [2024-07-15 23:28:39.562659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.273 [2024-07-15 23:28:39.562673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.273 [2024-07-15 23:28:39.562686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.273 [2024-07-15 23:28:39.562716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.273 qpair failed and we were unable to recover it. 
00:25:24.273 [2024-07-15 23:28:39.572519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.273 [2024-07-15 23:28:39.572656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.273 [2024-07-15 23:28:39.572681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.273 [2024-07-15 23:28:39.572696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.273 [2024-07-15 23:28:39.572709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.273 [2024-07-15 23:28:39.572752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.273 qpair failed and we were unable to recover it. 00:25:24.273 [2024-07-15 23:28:39.582619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.273 [2024-07-15 23:28:39.582757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.273 [2024-07-15 23:28:39.582782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.273 [2024-07-15 23:28:39.582797] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.273 [2024-07-15 23:28:39.582809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.273 [2024-07-15 23:28:39.582838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.273 qpair failed and we were unable to recover it. 00:25:24.532 [2024-07-15 23:28:39.592553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.532 [2024-07-15 23:28:39.592681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.532 [2024-07-15 23:28:39.592706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.532 [2024-07-15 23:28:39.592721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.532 [2024-07-15 23:28:39.592734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.532 [2024-07-15 23:28:39.592780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.532 qpair failed and we were unable to recover it. 
00:25:24.532 [2024-07-15 23:28:39.602568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.532 [2024-07-15 23:28:39.602704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.532 [2024-07-15 23:28:39.602744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.532 [2024-07-15 23:28:39.602763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.532 [2024-07-15 23:28:39.602776] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.532 [2024-07-15 23:28:39.602805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.532 qpair failed and we were unable to recover it. 00:25:24.532 [2024-07-15 23:28:39.612638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.532 [2024-07-15 23:28:39.612765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.532 [2024-07-15 23:28:39.612791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.532 [2024-07-15 23:28:39.612806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.532 [2024-07-15 23:28:39.612820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.532 [2024-07-15 23:28:39.612848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.532 qpair failed and we were unable to recover it. 00:25:24.532 [2024-07-15 23:28:39.622616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.532 [2024-07-15 23:28:39.622733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.532 [2024-07-15 23:28:39.622764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.532 [2024-07-15 23:28:39.622779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.532 [2024-07-15 23:28:39.622793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.532 [2024-07-15 23:28:39.622822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.532 qpair failed and we were unable to recover it. 
00:25:24.532 [2024-07-15 23:28:39.632751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.532 [2024-07-15 23:28:39.632860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.532 [2024-07-15 23:28:39.632885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.532 [2024-07-15 23:28:39.632899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.532 [2024-07-15 23:28:39.632912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.532 [2024-07-15 23:28:39.632950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.532 qpair failed and we were unable to recover it. 00:25:24.532 [2024-07-15 23:28:39.642675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.532 [2024-07-15 23:28:39.642789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.532 [2024-07-15 23:28:39.642821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.532 [2024-07-15 23:28:39.642836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.532 [2024-07-15 23:28:39.642849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.532 [2024-07-15 23:28:39.642878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.532 qpair failed and we were unable to recover it. 00:25:24.532 [2024-07-15 23:28:39.652783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.532 [2024-07-15 23:28:39.652894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.532 [2024-07-15 23:28:39.652920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.532 [2024-07-15 23:28:39.652935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.532 [2024-07-15 23:28:39.652948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.532 [2024-07-15 23:28:39.652977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.532 qpair failed and we were unable to recover it. 
00:25:24.532 [2024-07-15 23:28:39.662801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.532 [2024-07-15 23:28:39.662920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.532 [2024-07-15 23:28:39.662945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.532 [2024-07-15 23:28:39.662960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.532 [2024-07-15 23:28:39.662973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.532 [2024-07-15 23:28:39.663002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.532 qpair failed and we were unable to recover it. 00:25:24.532 [2024-07-15 23:28:39.672781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.532 [2024-07-15 23:28:39.672879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.532 [2024-07-15 23:28:39.672904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.532 [2024-07-15 23:28:39.672919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.532 [2024-07-15 23:28:39.672932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.532 [2024-07-15 23:28:39.672960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.532 qpair failed and we were unable to recover it. 00:25:24.532 [2024-07-15 23:28:39.682832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.532 [2024-07-15 23:28:39.682932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.533 [2024-07-15 23:28:39.682957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.533 [2024-07-15 23:28:39.682972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.533 [2024-07-15 23:28:39.682985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.533 [2024-07-15 23:28:39.683018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.533 qpair failed and we were unable to recover it. 
00:25:24.533 [2024-07-15 23:28:39.692880] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.533 [2024-07-15 23:28:39.692992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.533 [2024-07-15 23:28:39.693017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.533 [2024-07-15 23:28:39.693032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.533 [2024-07-15 23:28:39.693045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.533 [2024-07-15 23:28:39.693072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.533 qpair failed and we were unable to recover it. 00:25:24.533 [2024-07-15 23:28:39.702897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.533 [2024-07-15 23:28:39.703024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.533 [2024-07-15 23:28:39.703049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.533 [2024-07-15 23:28:39.703064] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.533 [2024-07-15 23:28:39.703077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.533 [2024-07-15 23:28:39.703104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.533 qpair failed and we were unable to recover it. 00:25:24.533 [2024-07-15 23:28:39.712907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.533 [2024-07-15 23:28:39.713005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.533 [2024-07-15 23:28:39.713030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.533 [2024-07-15 23:28:39.713044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.533 [2024-07-15 23:28:39.713057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.533 [2024-07-15 23:28:39.713085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.533 qpair failed and we were unable to recover it. 
00:25:24.533 [2024-07-15 23:28:39.722945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.533 [2024-07-15 23:28:39.723048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.533 [2024-07-15 23:28:39.723074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.533 [2024-07-15 23:28:39.723088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.533 [2024-07-15 23:28:39.723101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.533 [2024-07-15 23:28:39.723129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.533 qpair failed and we were unable to recover it. 00:25:24.533 [2024-07-15 23:28:39.733006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.533 [2024-07-15 23:28:39.733146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.533 [2024-07-15 23:28:39.733177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.533 [2024-07-15 23:28:39.733192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.533 [2024-07-15 23:28:39.733205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.533 [2024-07-15 23:28:39.733233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.533 qpair failed and we were unable to recover it. 00:25:24.533 [2024-07-15 23:28:39.743011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.533 [2024-07-15 23:28:39.743115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.533 [2024-07-15 23:28:39.743140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.533 [2024-07-15 23:28:39.743155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.533 [2024-07-15 23:28:39.743168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.533 [2024-07-15 23:28:39.743195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.533 qpair failed and we were unable to recover it. 
00:25:24.533 [2024-07-15 23:28:39.753083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.533 [2024-07-15 23:28:39.753204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.533 [2024-07-15 23:28:39.753228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.533 [2024-07-15 23:28:39.753243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.533 [2024-07-15 23:28:39.753256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.533 [2024-07-15 23:28:39.753284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.533 qpair failed and we were unable to recover it. 00:25:24.533 [2024-07-15 23:28:39.763074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.533 [2024-07-15 23:28:39.763194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.533 [2024-07-15 23:28:39.763219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.533 [2024-07-15 23:28:39.763233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.533 [2024-07-15 23:28:39.763247] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.533 [2024-07-15 23:28:39.763285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.533 qpair failed and we were unable to recover it. 00:25:24.533 [2024-07-15 23:28:39.773117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.533 [2024-07-15 23:28:39.773246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.533 [2024-07-15 23:28:39.773271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.533 [2024-07-15 23:28:39.773285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.533 [2024-07-15 23:28:39.773299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.533 [2024-07-15 23:28:39.773334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.533 qpair failed and we were unable to recover it. 
00:25:24.533 [2024-07-15 23:28:39.783150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.533 [2024-07-15 23:28:39.783345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.533 [2024-07-15 23:28:39.783370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.533 [2024-07-15 23:28:39.783385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.533 [2024-07-15 23:28:39.783398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.533 [2024-07-15 23:28:39.783427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.533 qpair failed and we were unable to recover it. 00:25:24.533 [2024-07-15 23:28:39.793173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.533 [2024-07-15 23:28:39.793342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.533 [2024-07-15 23:28:39.793367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.533 [2024-07-15 23:28:39.793382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.533 [2024-07-15 23:28:39.793394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.533 [2024-07-15 23:28:39.793428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.533 qpair failed and we were unable to recover it. 00:25:24.533 [2024-07-15 23:28:39.803168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.533 [2024-07-15 23:28:39.803287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.533 [2024-07-15 23:28:39.803313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.534 [2024-07-15 23:28:39.803327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.534 [2024-07-15 23:28:39.803340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.534 [2024-07-15 23:28:39.803368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.534 qpair failed and we were unable to recover it. 
00:25:24.534 [2024-07-15 23:28:39.813197] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.534 [2024-07-15 23:28:39.813337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.534 [2024-07-15 23:28:39.813362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.534 [2024-07-15 23:28:39.813376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.534 [2024-07-15 23:28:39.813390] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.534 [2024-07-15 23:28:39.813418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.534 qpair failed and we were unable to recover it. 00:25:24.534 [2024-07-15 23:28:39.823262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.534 [2024-07-15 23:28:39.823381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.534 [2024-07-15 23:28:39.823411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.534 [2024-07-15 23:28:39.823426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.534 [2024-07-15 23:28:39.823439] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.534 [2024-07-15 23:28:39.823468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.534 qpair failed and we were unable to recover it. 00:25:24.534 [2024-07-15 23:28:39.833299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.534 [2024-07-15 23:28:39.833419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.534 [2024-07-15 23:28:39.833444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.534 [2024-07-15 23:28:39.833458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.534 [2024-07-15 23:28:39.833472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.534 [2024-07-15 23:28:39.833500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.534 qpair failed and we were unable to recover it. 
00:25:24.534 [2024-07-15 23:28:39.843352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.534 [2024-07-15 23:28:39.843489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.534 [2024-07-15 23:28:39.843514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.534 [2024-07-15 23:28:39.843529] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.534 [2024-07-15 23:28:39.843542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.534 [2024-07-15 23:28:39.843570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.534 qpair failed and we were unable to recover it. 00:25:24.793 [2024-07-15 23:28:39.853355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.793 [2024-07-15 23:28:39.853486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.793 [2024-07-15 23:28:39.853511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.793 [2024-07-15 23:28:39.853525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.793 [2024-07-15 23:28:39.853538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.793 [2024-07-15 23:28:39.853567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.793 qpair failed and we were unable to recover it. 00:25:24.793 [2024-07-15 23:28:39.863352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.793 [2024-07-15 23:28:39.863478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.793 [2024-07-15 23:28:39.863503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.793 [2024-07-15 23:28:39.863518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.793 [2024-07-15 23:28:39.863532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.793 [2024-07-15 23:28:39.863566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.793 qpair failed and we were unable to recover it. 
00:25:24.793 [2024-07-15 23:28:39.873434] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.793 [2024-07-15 23:28:39.873563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.793 [2024-07-15 23:28:39.873588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.793 [2024-07-15 23:28:39.873603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.793 [2024-07-15 23:28:39.873616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.793 [2024-07-15 23:28:39.873645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.793 qpair failed and we were unable to recover it. 00:25:24.793 [2024-07-15 23:28:39.883436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.793 [2024-07-15 23:28:39.883553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.793 [2024-07-15 23:28:39.883578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.793 [2024-07-15 23:28:39.883592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.794 [2024-07-15 23:28:39.883606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.794 [2024-07-15 23:28:39.883636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.794 qpair failed and we were unable to recover it. 00:25:24.794 [2024-07-15 23:28:39.893462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.794 [2024-07-15 23:28:39.893599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.794 [2024-07-15 23:28:39.893625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.794 [2024-07-15 23:28:39.893639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.794 [2024-07-15 23:28:39.893652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.794 [2024-07-15 23:28:39.893680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.794 qpair failed and we were unable to recover it. 
00:25:24.794 [2024-07-15 23:28:39.903507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.794 [2024-07-15 23:28:39.903649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.794 [2024-07-15 23:28:39.903674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.794 [2024-07-15 23:28:39.903689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.794 [2024-07-15 23:28:39.903702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.794 [2024-07-15 23:28:39.903730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.794 qpair failed and we were unable to recover it. 00:25:24.794 [2024-07-15 23:28:39.913504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.794 [2024-07-15 23:28:39.913627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.794 [2024-07-15 23:28:39.913658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.794 [2024-07-15 23:28:39.913673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.794 [2024-07-15 23:28:39.913686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.794 [2024-07-15 23:28:39.913715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.794 qpair failed and we were unable to recover it. 00:25:24.794 [2024-07-15 23:28:39.923578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.794 [2024-07-15 23:28:39.923699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.794 [2024-07-15 23:28:39.923724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.794 [2024-07-15 23:28:39.923746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.794 [2024-07-15 23:28:39.923761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.794 [2024-07-15 23:28:39.923790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.794 qpair failed and we were unable to recover it. 
00:25:24.794 [2024-07-15 23:28:39.933595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.794 [2024-07-15 23:28:39.933726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.794 [2024-07-15 23:28:39.933758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.794 [2024-07-15 23:28:39.933775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.794 [2024-07-15 23:28:39.933788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.794 [2024-07-15 23:28:39.933816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.794 qpair failed and we were unable to recover it. 00:25:24.794 [2024-07-15 23:28:39.943632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.794 [2024-07-15 23:28:39.943769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.794 [2024-07-15 23:28:39.943794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.794 [2024-07-15 23:28:39.943809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.794 [2024-07-15 23:28:39.943822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.794 [2024-07-15 23:28:39.943851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.794 qpair failed and we were unable to recover it. 00:25:24.794 [2024-07-15 23:28:39.953604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.794 [2024-07-15 23:28:39.953725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.794 [2024-07-15 23:28:39.953757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.794 [2024-07-15 23:28:39.953772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.794 [2024-07-15 23:28:39.953791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.794 [2024-07-15 23:28:39.953819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.794 qpair failed and we were unable to recover it. 
00:25:24.794 [2024-07-15 23:28:39.963632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.794 [2024-07-15 23:28:39.963756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.794 [2024-07-15 23:28:39.963781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.794 [2024-07-15 23:28:39.963796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.794 [2024-07-15 23:28:39.963808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.794 [2024-07-15 23:28:39.963846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.794 qpair failed and we were unable to recover it. 00:25:24.794 [2024-07-15 23:28:39.973691] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.794 [2024-07-15 23:28:39.973822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.794 [2024-07-15 23:28:39.973848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.794 [2024-07-15 23:28:39.973863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.794 [2024-07-15 23:28:39.973875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.794 [2024-07-15 23:28:39.973904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.794 qpair failed and we were unable to recover it. 00:25:24.794 [2024-07-15 23:28:39.983706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.794 [2024-07-15 23:28:39.983837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.794 [2024-07-15 23:28:39.983863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.794 [2024-07-15 23:28:39.983878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.794 [2024-07-15 23:28:39.983891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.794 [2024-07-15 23:28:39.983919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.794 qpair failed and we were unable to recover it. 
00:25:24.794 [2024-07-15 23:28:39.993710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.794 [2024-07-15 23:28:39.993845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.794 [2024-07-15 23:28:39.993871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.794 [2024-07-15 23:28:39.993886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.794 [2024-07-15 23:28:39.993899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.794 [2024-07-15 23:28:39.993927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.794 qpair failed and we were unable to recover it. 00:25:24.794 [2024-07-15 23:28:40.003767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.794 [2024-07-15 23:28:40.003876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.794 [2024-07-15 23:28:40.003902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.794 [2024-07-15 23:28:40.003917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.794 [2024-07-15 23:28:40.003930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.794 [2024-07-15 23:28:40.003959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.794 qpair failed and we were unable to recover it. 00:25:24.794 [2024-07-15 23:28:40.013891] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.794 [2024-07-15 23:28:40.014036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.794 [2024-07-15 23:28:40.014063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.794 [2024-07-15 23:28:40.014078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.794 [2024-07-15 23:28:40.014091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.794 [2024-07-15 23:28:40.014125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.794 qpair failed and we were unable to recover it. 
00:25:24.794 [2024-07-15 23:28:40.023880] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.794 [2024-07-15 23:28:40.024070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.794 [2024-07-15 23:28:40.024096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.794 [2024-07-15 23:28:40.024112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.794 [2024-07-15 23:28:40.024125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.794 [2024-07-15 23:28:40.024158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.794 qpair failed and we were unable to recover it. 00:25:24.795 [2024-07-15 23:28:40.033847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.795 [2024-07-15 23:28:40.033987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.795 [2024-07-15 23:28:40.034013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.795 [2024-07-15 23:28:40.034028] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.795 [2024-07-15 23:28:40.034042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.795 [2024-07-15 23:28:40.034071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.795 qpair failed and we were unable to recover it. 00:25:24.795 [2024-07-15 23:28:40.043938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.795 [2024-07-15 23:28:40.044045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.795 [2024-07-15 23:28:40.044071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.795 [2024-07-15 23:28:40.044086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.795 [2024-07-15 23:28:40.044109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.795 [2024-07-15 23:28:40.044139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.795 qpair failed and we were unable to recover it. 
00:25:24.795 [2024-07-15 23:28:40.053939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.795 [2024-07-15 23:28:40.054051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.795 [2024-07-15 23:28:40.054077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.795 [2024-07-15 23:28:40.054092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.795 [2024-07-15 23:28:40.054105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.795 [2024-07-15 23:28:40.054134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.795 qpair failed and we were unable to recover it. 00:25:24.795 [2024-07-15 23:28:40.063947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.795 [2024-07-15 23:28:40.064073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.795 [2024-07-15 23:28:40.064099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.795 [2024-07-15 23:28:40.064113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.795 [2024-07-15 23:28:40.064127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.795 [2024-07-15 23:28:40.064155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.795 qpair failed and we were unable to recover it. 00:25:24.795 [2024-07-15 23:28:40.073960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.795 [2024-07-15 23:28:40.074068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.795 [2024-07-15 23:28:40.074093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.795 [2024-07-15 23:28:40.074108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.795 [2024-07-15 23:28:40.074121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.795 [2024-07-15 23:28:40.074149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.795 qpair failed and we were unable to recover it. 
00:25:24.795 [2024-07-15 23:28:40.084043] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.795 [2024-07-15 23:28:40.084167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.795 [2024-07-15 23:28:40.084193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.795 [2024-07-15 23:28:40.084207] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.795 [2024-07-15 23:28:40.084221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.795 [2024-07-15 23:28:40.084249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.795 qpair failed and we were unable to recover it. 00:25:24.795 [2024-07-15 23:28:40.094055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.795 [2024-07-15 23:28:40.094189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.795 [2024-07-15 23:28:40.094215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.795 [2024-07-15 23:28:40.094230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.795 [2024-07-15 23:28:40.094244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.795 [2024-07-15 23:28:40.094273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.795 qpair failed and we were unable to recover it. 00:25:24.795 [2024-07-15 23:28:40.104070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:24.795 [2024-07-15 23:28:40.104203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:24.795 [2024-07-15 23:28:40.104228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:24.795 [2024-07-15 23:28:40.104243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:24.795 [2024-07-15 23:28:40.104256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:24.795 [2024-07-15 23:28:40.104284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:24.795 qpair failed and we were unable to recover it. 
00:25:25.055 [2024-07-15 23:28:40.114090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.055 [2024-07-15 23:28:40.114216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.055 [2024-07-15 23:28:40.114242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.055 [2024-07-15 23:28:40.114257] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.055 [2024-07-15 23:28:40.114270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.055 [2024-07-15 23:28:40.114298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.055 qpair failed and we were unable to recover it. 00:25:25.055 [2024-07-15 23:28:40.124091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.055 [2024-07-15 23:28:40.124240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.055 [2024-07-15 23:28:40.124265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.055 [2024-07-15 23:28:40.124280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.055 [2024-07-15 23:28:40.124294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.055 [2024-07-15 23:28:40.124322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.055 qpair failed and we were unable to recover it. 00:25:25.055 [2024-07-15 23:28:40.134148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.055 [2024-07-15 23:28:40.134273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.055 [2024-07-15 23:28:40.134298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.055 [2024-07-15 23:28:40.134319] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.055 [2024-07-15 23:28:40.134332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.055 [2024-07-15 23:28:40.134361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.055 qpair failed and we were unable to recover it. 
00:25:25.055 [2024-07-15 23:28:40.144217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.055 [2024-07-15 23:28:40.144339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.055 [2024-07-15 23:28:40.144364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.055 [2024-07-15 23:28:40.144379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.055 [2024-07-15 23:28:40.144392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.055 [2024-07-15 23:28:40.144420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.055 qpair failed and we were unable to recover it. 00:25:25.055 [2024-07-15 23:28:40.154208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.055 [2024-07-15 23:28:40.154337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.055 [2024-07-15 23:28:40.154362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.055 [2024-07-15 23:28:40.154377] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.055 [2024-07-15 23:28:40.154390] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.055 [2024-07-15 23:28:40.154418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.055 qpair failed and we were unable to recover it. 00:25:25.055 [2024-07-15 23:28:40.164199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.055 [2024-07-15 23:28:40.164330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.055 [2024-07-15 23:28:40.164356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.055 [2024-07-15 23:28:40.164370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.055 [2024-07-15 23:28:40.164383] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.055 [2024-07-15 23:28:40.164411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.055 qpair failed and we were unable to recover it. 
00:25:25.055 [2024-07-15 23:28:40.174351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.055 [2024-07-15 23:28:40.174479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.055 [2024-07-15 23:28:40.174504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.055 [2024-07-15 23:28:40.174519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.055 [2024-07-15 23:28:40.174532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.055 [2024-07-15 23:28:40.174560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.055 qpair failed and we were unable to recover it. 00:25:25.055 [2024-07-15 23:28:40.184283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.055 [2024-07-15 23:28:40.184402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.055 [2024-07-15 23:28:40.184428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.055 [2024-07-15 23:28:40.184443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.055 [2024-07-15 23:28:40.184456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.055 [2024-07-15 23:28:40.184484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.055 qpair failed and we were unable to recover it. 00:25:25.055 [2024-07-15 23:28:40.194365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.055 [2024-07-15 23:28:40.194531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.055 [2024-07-15 23:28:40.194557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.055 [2024-07-15 23:28:40.194572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.055 [2024-07-15 23:28:40.194585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.055 [2024-07-15 23:28:40.194613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.055 qpair failed and we were unable to recover it. 
00:25:25.055 [2024-07-15 23:28:40.204312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.055 [2024-07-15 23:28:40.204486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.055 [2024-07-15 23:28:40.204511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.055 [2024-07-15 23:28:40.204526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.056 [2024-07-15 23:28:40.204539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.056 [2024-07-15 23:28:40.204569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.056 qpair failed and we were unable to recover it. 00:25:25.056 [2024-07-15 23:28:40.214392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.056 [2024-07-15 23:28:40.214522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.056 [2024-07-15 23:28:40.214548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.056 [2024-07-15 23:28:40.214563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.056 [2024-07-15 23:28:40.214576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.056 [2024-07-15 23:28:40.214605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.056 qpair failed and we were unable to recover it. 00:25:25.056 [2024-07-15 23:28:40.224387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.056 [2024-07-15 23:28:40.224562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.056 [2024-07-15 23:28:40.224588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.056 [2024-07-15 23:28:40.224609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.056 [2024-07-15 23:28:40.224623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.056 [2024-07-15 23:28:40.224652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.056 qpair failed and we were unable to recover it. 
00:25:25.056 [2024-07-15 23:28:40.234414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.056 [2024-07-15 23:28:40.234530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.056 [2024-07-15 23:28:40.234555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.056 [2024-07-15 23:28:40.234570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.056 [2024-07-15 23:28:40.234583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.056 [2024-07-15 23:28:40.234612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.056 qpair failed and we were unable to recover it. 00:25:25.056 [2024-07-15 23:28:40.244404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.056 [2024-07-15 23:28:40.244526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.056 [2024-07-15 23:28:40.244551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.056 [2024-07-15 23:28:40.244566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.056 [2024-07-15 23:28:40.244579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.056 [2024-07-15 23:28:40.244608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.056 qpair failed and we were unable to recover it. 00:25:25.056 [2024-07-15 23:28:40.254449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.056 [2024-07-15 23:28:40.254578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.056 [2024-07-15 23:28:40.254603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.056 [2024-07-15 23:28:40.254617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.056 [2024-07-15 23:28:40.254631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.056 [2024-07-15 23:28:40.254660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.056 qpair failed and we were unable to recover it. 
00:25:25.056 [2024-07-15 23:28:40.264541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.056 [2024-07-15 23:28:40.264677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.056 [2024-07-15 23:28:40.264702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.056 [2024-07-15 23:28:40.264717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.056 [2024-07-15 23:28:40.264730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.056 [2024-07-15 23:28:40.264767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.056 qpair failed and we were unable to recover it. 00:25:25.056 [2024-07-15 23:28:40.274538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.056 [2024-07-15 23:28:40.274661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.056 [2024-07-15 23:28:40.274687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.056 [2024-07-15 23:28:40.274701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.056 [2024-07-15 23:28:40.274716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.056 [2024-07-15 23:28:40.274751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.056 qpair failed and we were unable to recover it. 00:25:25.056 [2024-07-15 23:28:40.284560] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.056 [2024-07-15 23:28:40.284709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.056 [2024-07-15 23:28:40.284734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.056 [2024-07-15 23:28:40.284763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.056 [2024-07-15 23:28:40.284778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.056 [2024-07-15 23:28:40.284807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.056 qpair failed and we were unable to recover it. 
00:25:25.056 [2024-07-15 23:28:40.294644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.056 [2024-07-15 23:28:40.294781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.056 [2024-07-15 23:28:40.294806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.056 [2024-07-15 23:28:40.294821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.056 [2024-07-15 23:28:40.294835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.056 [2024-07-15 23:28:40.294863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.056 qpair failed and we were unable to recover it. 00:25:25.056 [2024-07-15 23:28:40.304633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.056 [2024-07-15 23:28:40.304764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.056 [2024-07-15 23:28:40.304790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.056 [2024-07-15 23:28:40.304804] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.056 [2024-07-15 23:28:40.304817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.056 [2024-07-15 23:28:40.304847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.056 qpair failed and we were unable to recover it. 00:25:25.056 [2024-07-15 23:28:40.314623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.056 [2024-07-15 23:28:40.314752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.056 [2024-07-15 23:28:40.314778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.056 [2024-07-15 23:28:40.314799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.056 [2024-07-15 23:28:40.314813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.056 [2024-07-15 23:28:40.314841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.056 qpair failed and we were unable to recover it. 
00:25:25.056 [2024-07-15 23:28:40.324731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.056 [2024-07-15 23:28:40.324852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.056 [2024-07-15 23:28:40.324878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.056 [2024-07-15 23:28:40.324893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.056 [2024-07-15 23:28:40.324906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.056 [2024-07-15 23:28:40.324935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.056 qpair failed and we were unable to recover it. 00:25:25.056 [2024-07-15 23:28:40.334743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.056 [2024-07-15 23:28:40.334878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.056 [2024-07-15 23:28:40.334903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.056 [2024-07-15 23:28:40.334918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.056 [2024-07-15 23:28:40.334931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.056 [2024-07-15 23:28:40.334959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.056 qpair failed and we were unable to recover it. 00:25:25.056 [2024-07-15 23:28:40.344693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.056 [2024-07-15 23:28:40.344843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.056 [2024-07-15 23:28:40.344869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.056 [2024-07-15 23:28:40.344884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.056 [2024-07-15 23:28:40.344898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.056 [2024-07-15 23:28:40.344927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.056 qpair failed and we were unable to recover it. 
00:25:25.056 [2024-07-15 23:28:40.354774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.057 [2024-07-15 23:28:40.354887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.057 [2024-07-15 23:28:40.354912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.057 [2024-07-15 23:28:40.354927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.057 [2024-07-15 23:28:40.354940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.057 [2024-07-15 23:28:40.354969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.057 qpair failed and we were unable to recover it. 00:25:25.057 [2024-07-15 23:28:40.364796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.057 [2024-07-15 23:28:40.364902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.057 [2024-07-15 23:28:40.364927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.057 [2024-07-15 23:28:40.364942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.057 [2024-07-15 23:28:40.364957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.057 [2024-07-15 23:28:40.364987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.057 qpair failed and we were unable to recover it. 00:25:25.316 [2024-07-15 23:28:40.374850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.316 [2024-07-15 23:28:40.374961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.316 [2024-07-15 23:28:40.374986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.316 [2024-07-15 23:28:40.375001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.316 [2024-07-15 23:28:40.375014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.316 [2024-07-15 23:28:40.375042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.316 qpair failed and we were unable to recover it. 
00:25:25.316 [2024-07-15 23:28:40.384831] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.316 [2024-07-15 23:28:40.384940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.316 [2024-07-15 23:28:40.384965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.316 [2024-07-15 23:28:40.384980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.316 [2024-07-15 23:28:40.384993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.316 [2024-07-15 23:28:40.385021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.316 qpair failed and we were unable to recover it. 00:25:25.316 [2024-07-15 23:28:40.394863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.316 [2024-07-15 23:28:40.394972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.316 [2024-07-15 23:28:40.394998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.316 [2024-07-15 23:28:40.395012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.316 [2024-07-15 23:28:40.395026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.316 [2024-07-15 23:28:40.395054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.316 qpair failed and we were unable to recover it. 00:25:25.316 [2024-07-15 23:28:40.404900] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.316 [2024-07-15 23:28:40.405004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.316 [2024-07-15 23:28:40.405037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.316 [2024-07-15 23:28:40.405053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.316 [2024-07-15 23:28:40.405066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.316 [2024-07-15 23:28:40.405095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.316 qpair failed and we were unable to recover it. 
00:25:25.316 [2024-07-15 23:28:40.414973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.316 [2024-07-15 23:28:40.415083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.316 [2024-07-15 23:28:40.415109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.316 [2024-07-15 23:28:40.415124] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.316 [2024-07-15 23:28:40.415137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.316 [2024-07-15 23:28:40.415165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.316 qpair failed and we were unable to recover it. 00:25:25.316 [2024-07-15 23:28:40.424953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.316 [2024-07-15 23:28:40.425080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.316 [2024-07-15 23:28:40.425105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.316 [2024-07-15 23:28:40.425120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.316 [2024-07-15 23:28:40.425133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.316 [2024-07-15 23:28:40.425162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.316 qpair failed and we were unable to recover it. 00:25:25.316 [2024-07-15 23:28:40.434988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.316 [2024-07-15 23:28:40.435122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.317 [2024-07-15 23:28:40.435147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.317 [2024-07-15 23:28:40.435162] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.317 [2024-07-15 23:28:40.435175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.317 [2024-07-15 23:28:40.435203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.317 qpair failed and we were unable to recover it. 
00:25:25.317 [2024-07-15 23:28:40.445014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.317 [2024-07-15 23:28:40.445123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.317 [2024-07-15 23:28:40.445149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.317 [2024-07-15 23:28:40.445163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.317 [2024-07-15 23:28:40.445176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.317 [2024-07-15 23:28:40.445204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.317 qpair failed and we were unable to recover it. 00:25:25.317 [2024-07-15 23:28:40.455116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.317 [2024-07-15 23:28:40.455246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.317 [2024-07-15 23:28:40.455273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.317 [2024-07-15 23:28:40.455288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.317 [2024-07-15 23:28:40.455300] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.317 [2024-07-15 23:28:40.455329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.317 qpair failed and we were unable to recover it. 00:25:25.317 [2024-07-15 23:28:40.465068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.317 [2024-07-15 23:28:40.465191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.317 [2024-07-15 23:28:40.465217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.317 [2024-07-15 23:28:40.465232] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.317 [2024-07-15 23:28:40.465245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.317 [2024-07-15 23:28:40.465273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.317 qpair failed and we were unable to recover it. 
00:25:25.317 [2024-07-15 23:28:40.475063] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.317 [2024-07-15 23:28:40.475194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.317 [2024-07-15 23:28:40.475219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.317 [2024-07-15 23:28:40.475234] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.317 [2024-07-15 23:28:40.475247] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.317 [2024-07-15 23:28:40.475275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.317 qpair failed and we were unable to recover it. 00:25:25.317 [2024-07-15 23:28:40.485219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.317 [2024-07-15 23:28:40.485345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.317 [2024-07-15 23:28:40.485370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.317 [2024-07-15 23:28:40.485384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.317 [2024-07-15 23:28:40.485397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.317 [2024-07-15 23:28:40.485426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.317 qpair failed and we were unable to recover it. 00:25:25.317 [2024-07-15 23:28:40.495133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.317 [2024-07-15 23:28:40.495263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.317 [2024-07-15 23:28:40.495293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.317 [2024-07-15 23:28:40.495308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.317 [2024-07-15 23:28:40.495321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.317 [2024-07-15 23:28:40.495349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.317 qpair failed and we were unable to recover it. 
00:25:25.317 [2024-07-15 23:28:40.505173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.317 [2024-07-15 23:28:40.505343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.317 [2024-07-15 23:28:40.505368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.317 [2024-07-15 23:28:40.505383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.317 [2024-07-15 23:28:40.505396] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.317 [2024-07-15 23:28:40.505424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.317 qpair failed and we were unable to recover it. 00:25:25.317 [2024-07-15 23:28:40.515208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.317 [2024-07-15 23:28:40.515330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.317 [2024-07-15 23:28:40.515355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.317 [2024-07-15 23:28:40.515370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.317 [2024-07-15 23:28:40.515383] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.317 [2024-07-15 23:28:40.515411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.317 qpair failed and we were unable to recover it. 00:25:25.317 [2024-07-15 23:28:40.525263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.317 [2024-07-15 23:28:40.525412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.317 [2024-07-15 23:28:40.525437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.317 [2024-07-15 23:28:40.525451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.317 [2024-07-15 23:28:40.525464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.317 [2024-07-15 23:28:40.525507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.317 qpair failed and we were unable to recover it. 
00:25:25.317 [2024-07-15 23:28:40.535313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.317 [2024-07-15 23:28:40.535448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.317 [2024-07-15 23:28:40.535473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.317 [2024-07-15 23:28:40.535488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.317 [2024-07-15 23:28:40.535501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.317 [2024-07-15 23:28:40.535545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.317 qpair failed and we were unable to recover it. 00:25:25.317 [2024-07-15 23:28:40.545313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.317 [2024-07-15 23:28:40.545457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.317 [2024-07-15 23:28:40.545483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.317 [2024-07-15 23:28:40.545498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.317 [2024-07-15 23:28:40.545511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.317 [2024-07-15 23:28:40.545540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.317 qpair failed and we were unable to recover it. 00:25:25.317 [2024-07-15 23:28:40.555347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.317 [2024-07-15 23:28:40.555498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.317 [2024-07-15 23:28:40.555524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.317 [2024-07-15 23:28:40.555538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.317 [2024-07-15 23:28:40.555551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.317 [2024-07-15 23:28:40.555579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.317 qpair failed and we were unable to recover it. 
00:25:25.317 [2024-07-15 23:28:40.565378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.317 [2024-07-15 23:28:40.565497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.317 [2024-07-15 23:28:40.565522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.317 [2024-07-15 23:28:40.565537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.317 [2024-07-15 23:28:40.565551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.317 [2024-07-15 23:28:40.565580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.317 qpair failed and we were unable to recover it. 00:25:25.317 [2024-07-15 23:28:40.575431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.317 [2024-07-15 23:28:40.575559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.317 [2024-07-15 23:28:40.575584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.317 [2024-07-15 23:28:40.575599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.317 [2024-07-15 23:28:40.575613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.318 [2024-07-15 23:28:40.575641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.318 qpair failed and we were unable to recover it. 00:25:25.318 [2024-07-15 23:28:40.585468] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.318 [2024-07-15 23:28:40.585646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.318 [2024-07-15 23:28:40.585676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.318 [2024-07-15 23:28:40.585691] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.318 [2024-07-15 23:28:40.585704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.318 [2024-07-15 23:28:40.585733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.318 qpair failed and we were unable to recover it. 
00:25:25.318 [2024-07-15 23:28:40.595425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.318 [2024-07-15 23:28:40.595541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.318 [2024-07-15 23:28:40.595566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.318 [2024-07-15 23:28:40.595581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.318 [2024-07-15 23:28:40.595594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.318 [2024-07-15 23:28:40.595622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.318 qpair failed and we were unable to recover it. 00:25:25.318 [2024-07-15 23:28:40.605476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.318 [2024-07-15 23:28:40.605643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.318 [2024-07-15 23:28:40.605668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.318 [2024-07-15 23:28:40.605683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.318 [2024-07-15 23:28:40.605696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.318 [2024-07-15 23:28:40.605724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.318 qpair failed and we were unable to recover it. 00:25:25.318 [2024-07-15 23:28:40.615490] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.318 [2024-07-15 23:28:40.615618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.318 [2024-07-15 23:28:40.615643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.318 [2024-07-15 23:28:40.615658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.318 [2024-07-15 23:28:40.615671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.318 [2024-07-15 23:28:40.615699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.318 qpair failed and we were unable to recover it. 
00:25:25.318 [2024-07-15 23:28:40.625513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.318 [2024-07-15 23:28:40.625634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.318 [2024-07-15 23:28:40.625659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.318 [2024-07-15 23:28:40.625674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.318 [2024-07-15 23:28:40.625687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.318 [2024-07-15 23:28:40.625722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.318 qpair failed and we were unable to recover it. 00:25:25.577 [2024-07-15 23:28:40.635552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.577 [2024-07-15 23:28:40.635669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.577 [2024-07-15 23:28:40.635695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.577 [2024-07-15 23:28:40.635709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.577 [2024-07-15 23:28:40.635722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.577 [2024-07-15 23:28:40.635758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.577 qpair failed and we were unable to recover it. 00:25:25.577 [2024-07-15 23:28:40.645571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.577 [2024-07-15 23:28:40.645698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.577 [2024-07-15 23:28:40.645723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.577 [2024-07-15 23:28:40.645746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.577 [2024-07-15 23:28:40.645762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.577 [2024-07-15 23:28:40.645791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.577 qpair failed and we were unable to recover it. 
00:25:25.577 [2024-07-15 23:28:40.655605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.577 [2024-07-15 23:28:40.655802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.577 [2024-07-15 23:28:40.655827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.577 [2024-07-15 23:28:40.655842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.577 [2024-07-15 23:28:40.655855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.577 [2024-07-15 23:28:40.655884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.577 qpair failed and we were unable to recover it. 00:25:25.577 [2024-07-15 23:28:40.665620] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.577 [2024-07-15 23:28:40.665746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.577 [2024-07-15 23:28:40.665772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.577 [2024-07-15 23:28:40.665786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.577 [2024-07-15 23:28:40.665799] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.577 [2024-07-15 23:28:40.665828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.577 qpair failed and we were unable to recover it. 00:25:25.577 [2024-07-15 23:28:40.675634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.577 [2024-07-15 23:28:40.675760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.577 [2024-07-15 23:28:40.675792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.577 [2024-07-15 23:28:40.675808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.577 [2024-07-15 23:28:40.675822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.577 [2024-07-15 23:28:40.675851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.577 qpair failed and we were unable to recover it. 
00:25:25.577 [2024-07-15 23:28:40.685639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.577 [2024-07-15 23:28:40.685799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.577 [2024-07-15 23:28:40.685825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.577 [2024-07-15 23:28:40.685840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.577 [2024-07-15 23:28:40.685853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.577 [2024-07-15 23:28:40.685881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.577 qpair failed and we were unable to recover it. 00:25:25.577 [2024-07-15 23:28:40.695682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.577 [2024-07-15 23:28:40.695817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.577 [2024-07-15 23:28:40.695842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.577 [2024-07-15 23:28:40.695857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.577 [2024-07-15 23:28:40.695870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.577 [2024-07-15 23:28:40.695899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.577 qpair failed and we were unable to recover it. 00:25:25.577 [2024-07-15 23:28:40.705713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.577 [2024-07-15 23:28:40.705841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.577 [2024-07-15 23:28:40.705866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.577 [2024-07-15 23:28:40.705880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.577 [2024-07-15 23:28:40.705894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.577 [2024-07-15 23:28:40.705922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.577 qpair failed and we were unable to recover it. 
00:25:25.577 [2024-07-15 23:28:40.715794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.577 [2024-07-15 23:28:40.715903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.577 [2024-07-15 23:28:40.715928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.577 [2024-07-15 23:28:40.715943] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.577 [2024-07-15 23:28:40.715962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.577 [2024-07-15 23:28:40.715991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.577 qpair failed and we were unable to recover it. 00:25:25.577 [2024-07-15 23:28:40.725797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.577 [2024-07-15 23:28:40.725906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.577 [2024-07-15 23:28:40.725931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.577 [2024-07-15 23:28:40.725946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.577 [2024-07-15 23:28:40.725960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.577 [2024-07-15 23:28:40.725989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.577 qpair failed and we were unable to recover it. 00:25:25.577 [2024-07-15 23:28:40.735809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.577 [2024-07-15 23:28:40.735922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.577 [2024-07-15 23:28:40.735947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.577 [2024-07-15 23:28:40.735962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.577 [2024-07-15 23:28:40.735976] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.577 [2024-07-15 23:28:40.736004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.577 qpair failed and we were unable to recover it. 
00:25:25.577 [2024-07-15 23:28:40.745851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.577 [2024-07-15 23:28:40.746008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.577 [2024-07-15 23:28:40.746033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.577 [2024-07-15 23:28:40.746048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.577 [2024-07-15 23:28:40.746061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.577 [2024-07-15 23:28:40.746090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.577 qpair failed and we were unable to recover it. 00:25:25.577 [2024-07-15 23:28:40.755859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.577 [2024-07-15 23:28:40.755961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.577 [2024-07-15 23:28:40.755986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.577 [2024-07-15 23:28:40.756002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.577 [2024-07-15 23:28:40.756015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.577 [2024-07-15 23:28:40.756043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.577 qpair failed and we were unable to recover it. 00:25:25.577 [2024-07-15 23:28:40.765885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.577 [2024-07-15 23:28:40.766003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.577 [2024-07-15 23:28:40.766028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.577 [2024-07-15 23:28:40.766043] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.577 [2024-07-15 23:28:40.766056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.577 [2024-07-15 23:28:40.766085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.577 qpair failed and we were unable to recover it. 
00:25:25.577 [2024-07-15 23:28:40.776058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.577 [2024-07-15 23:28:40.776188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.577 [2024-07-15 23:28:40.776214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.577 [2024-07-15 23:28:40.776229] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.578 [2024-07-15 23:28:40.776242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.578 [2024-07-15 23:28:40.776271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.578 qpair failed and we were unable to recover it. 00:25:25.578 [2024-07-15 23:28:40.785959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.578 [2024-07-15 23:28:40.786076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.578 [2024-07-15 23:28:40.786101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.578 [2024-07-15 23:28:40.786115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.578 [2024-07-15 23:28:40.786129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.578 [2024-07-15 23:28:40.786158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.578 qpair failed and we were unable to recover it. 00:25:25.578 [2024-07-15 23:28:40.795981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.578 [2024-07-15 23:28:40.796111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.578 [2024-07-15 23:28:40.796136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.578 [2024-07-15 23:28:40.796151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.578 [2024-07-15 23:28:40.796164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.578 [2024-07-15 23:28:40.796192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.578 qpair failed and we were unable to recover it. 
00:25:25.578 [2024-07-15 23:28:40.805987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.578 [2024-07-15 23:28:40.806092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.578 [2024-07-15 23:28:40.806117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.578 [2024-07-15 23:28:40.806132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.578 [2024-07-15 23:28:40.806150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.578 [2024-07-15 23:28:40.806179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.578 qpair failed and we were unable to recover it. 00:25:25.578 [2024-07-15 23:28:40.816085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.578 [2024-07-15 23:28:40.816236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.578 [2024-07-15 23:28:40.816261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.578 [2024-07-15 23:28:40.816276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.578 [2024-07-15 23:28:40.816289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.578 [2024-07-15 23:28:40.816317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.578 qpair failed and we were unable to recover it. 00:25:25.578 [2024-07-15 23:28:40.826045] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.578 [2024-07-15 23:28:40.826177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.578 [2024-07-15 23:28:40.826202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.578 [2024-07-15 23:28:40.826216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.578 [2024-07-15 23:28:40.826230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.578 [2024-07-15 23:28:40.826258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.578 qpair failed and we were unable to recover it. 
00:25:25.578 [2024-07-15 23:28:40.836093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.578 [2024-07-15 23:28:40.836212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.578 [2024-07-15 23:28:40.836237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.578 [2024-07-15 23:28:40.836252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.578 [2024-07-15 23:28:40.836265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.578 [2024-07-15 23:28:40.836293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.578 qpair failed and we were unable to recover it. 00:25:25.578 [2024-07-15 23:28:40.846155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.578 [2024-07-15 23:28:40.846289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.578 [2024-07-15 23:28:40.846313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.578 [2024-07-15 23:28:40.846328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.578 [2024-07-15 23:28:40.846341] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.578 [2024-07-15 23:28:40.846369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.578 qpair failed and we were unable to recover it. 00:25:25.578 [2024-07-15 23:28:40.856239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.578 [2024-07-15 23:28:40.856373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.578 [2024-07-15 23:28:40.856398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.578 [2024-07-15 23:28:40.856413] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.578 [2024-07-15 23:28:40.856426] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.578 [2024-07-15 23:28:40.856455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.578 qpair failed and we were unable to recover it. 
00:25:25.578 [2024-07-15 23:28:40.866183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.578 [2024-07-15 23:28:40.866350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.578 [2024-07-15 23:28:40.866375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.578 [2024-07-15 23:28:40.866389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.578 [2024-07-15 23:28:40.866402] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.578 [2024-07-15 23:28:40.866431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.578 qpair failed and we were unable to recover it. 00:25:25.578 [2024-07-15 23:28:40.876214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.578 [2024-07-15 23:28:40.876336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.578 [2024-07-15 23:28:40.876361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.578 [2024-07-15 23:28:40.876375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.578 [2024-07-15 23:28:40.876388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.578 [2024-07-15 23:28:40.876417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.578 qpair failed and we were unable to recover it. 00:25:25.578 [2024-07-15 23:28:40.886241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.578 [2024-07-15 23:28:40.886365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.578 [2024-07-15 23:28:40.886390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.578 [2024-07-15 23:28:40.886405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.578 [2024-07-15 23:28:40.886418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.578 [2024-07-15 23:28:40.886447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.578 qpair failed and we were unable to recover it. 
00:25:25.837 [2024-07-15 23:28:40.896265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.837 [2024-07-15 23:28:40.896376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.837 [2024-07-15 23:28:40.896402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.837 [2024-07-15 23:28:40.896416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.837 [2024-07-15 23:28:40.896435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.837 [2024-07-15 23:28:40.896464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.837 qpair failed and we were unable to recover it. 00:25:25.837 [2024-07-15 23:28:40.906290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.837 [2024-07-15 23:28:40.906412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.837 [2024-07-15 23:28:40.906437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.837 [2024-07-15 23:28:40.906452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.837 [2024-07-15 23:28:40.906465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.837 [2024-07-15 23:28:40.906493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.837 qpair failed and we were unable to recover it. 00:25:25.837 [2024-07-15 23:28:40.916392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.837 [2024-07-15 23:28:40.916511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.837 [2024-07-15 23:28:40.916537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.837 [2024-07-15 23:28:40.916551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.837 [2024-07-15 23:28:40.916564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.837 [2024-07-15 23:28:40.916593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.837 qpair failed and we were unable to recover it. 
00:25:25.837 [2024-07-15 23:28:40.926329] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.837 [2024-07-15 23:28:40.926453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.837 [2024-07-15 23:28:40.926479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.837 [2024-07-15 23:28:40.926493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.837 [2024-07-15 23:28:40.926506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.837 [2024-07-15 23:28:40.926538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.837 qpair failed and we were unable to recover it. 00:25:25.837 [2024-07-15 23:28:40.936368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.837 [2024-07-15 23:28:40.936535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.837 [2024-07-15 23:28:40.936560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.837 [2024-07-15 23:28:40.936575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.837 [2024-07-15 23:28:40.936588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.837 [2024-07-15 23:28:40.936616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.837 qpair failed and we were unable to recover it. 00:25:25.837 [2024-07-15 23:28:40.946376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.837 [2024-07-15 23:28:40.946497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.837 [2024-07-15 23:28:40.946522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.837 [2024-07-15 23:28:40.946536] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.838 [2024-07-15 23:28:40.946549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.838 [2024-07-15 23:28:40.946578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.838 qpair failed and we were unable to recover it. 
00:25:25.838 [2024-07-15 23:28:40.956433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.838 [2024-07-15 23:28:40.956557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.838 [2024-07-15 23:28:40.956582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.838 [2024-07-15 23:28:40.956597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.838 [2024-07-15 23:28:40.956610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.838 [2024-07-15 23:28:40.956639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.838 qpair failed and we were unable to recover it. 00:25:25.838 [2024-07-15 23:28:40.966465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.838 [2024-07-15 23:28:40.966583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.838 [2024-07-15 23:28:40.966608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.838 [2024-07-15 23:28:40.966623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.838 [2024-07-15 23:28:40.966635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.838 [2024-07-15 23:28:40.966663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.838 qpair failed and we were unable to recover it. 00:25:25.838 [2024-07-15 23:28:40.976493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.838 [2024-07-15 23:28:40.976619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.838 [2024-07-15 23:28:40.976644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.838 [2024-07-15 23:28:40.976658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.838 [2024-07-15 23:28:40.976671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.838 [2024-07-15 23:28:40.976699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.838 qpair failed and we were unable to recover it. 
00:25:25.838 [2024-07-15 23:28:40.986506] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.838 [2024-07-15 23:28:40.986638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.838 [2024-07-15 23:28:40.986663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.838 [2024-07-15 23:28:40.986684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.838 [2024-07-15 23:28:40.986697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.838 [2024-07-15 23:28:40.986727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.838 qpair failed and we were unable to recover it. 00:25:25.838 [2024-07-15 23:28:40.996538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.838 [2024-07-15 23:28:40.996666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.838 [2024-07-15 23:28:40.996692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.838 [2024-07-15 23:28:40.996707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.838 [2024-07-15 23:28:40.996720] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.838 [2024-07-15 23:28:40.996753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.838 qpair failed and we were unable to recover it. 00:25:25.838 [2024-07-15 23:28:41.006568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.838 [2024-07-15 23:28:41.006688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.838 [2024-07-15 23:28:41.006713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.838 [2024-07-15 23:28:41.006727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.838 [2024-07-15 23:28:41.006747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.838 [2024-07-15 23:28:41.006777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.838 qpair failed and we were unable to recover it. 
00:25:25.838 [2024-07-15 23:28:41.016712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.838 [2024-07-15 23:28:41.016848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.838 [2024-07-15 23:28:41.016871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.838 [2024-07-15 23:28:41.016886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.838 [2024-07-15 23:28:41.016898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.838 [2024-07-15 23:28:41.016925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.838 qpair failed and we were unable to recover it. 00:25:25.838 [2024-07-15 23:28:41.026743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.838 [2024-07-15 23:28:41.026899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.838 [2024-07-15 23:28:41.026924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.838 [2024-07-15 23:28:41.026939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.838 [2024-07-15 23:28:41.026952] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.838 [2024-07-15 23:28:41.026981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.838 qpair failed and we were unable to recover it. 00:25:25.838 [2024-07-15 23:28:41.036630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.838 [2024-07-15 23:28:41.036765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.838 [2024-07-15 23:28:41.036790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.838 [2024-07-15 23:28:41.036804] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.838 [2024-07-15 23:28:41.036817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.838 [2024-07-15 23:28:41.036846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.838 qpair failed and we were unable to recover it. 
00:25:25.838 [2024-07-15 23:28:41.046681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.838 [2024-07-15 23:28:41.046835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.838 [2024-07-15 23:28:41.046861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.838 [2024-07-15 23:28:41.046875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.838 [2024-07-15 23:28:41.046888] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.838 [2024-07-15 23:28:41.046917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.838 qpair failed and we were unable to recover it. 00:25:25.838 [2024-07-15 23:28:41.056800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.838 [2024-07-15 23:28:41.056910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.838 [2024-07-15 23:28:41.056936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.838 [2024-07-15 23:28:41.056951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.838 [2024-07-15 23:28:41.056964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.838 [2024-07-15 23:28:41.056992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.838 qpair failed and we were unable to recover it. 00:25:25.838 [2024-07-15 23:28:41.066796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.838 [2024-07-15 23:28:41.066908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.838 [2024-07-15 23:28:41.066934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.838 [2024-07-15 23:28:41.066949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.838 [2024-07-15 23:28:41.066962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.838 [2024-07-15 23:28:41.066990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.838 qpair failed and we were unable to recover it. 
00:25:25.838 [2024-07-15 23:28:41.076790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.838 [2024-07-15 23:28:41.076894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.838 [2024-07-15 23:28:41.076919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.838 [2024-07-15 23:28:41.076940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.838 [2024-07-15 23:28:41.076954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.838 [2024-07-15 23:28:41.076983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.838 qpair failed and we were unable to recover it. 00:25:25.838 [2024-07-15 23:28:41.086817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.838 [2024-07-15 23:28:41.086965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.838 [2024-07-15 23:28:41.086990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.838 [2024-07-15 23:28:41.087005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.839 [2024-07-15 23:28:41.087018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.839 [2024-07-15 23:28:41.087047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.839 qpair failed and we were unable to recover it. 00:25:25.839 [2024-07-15 23:28:41.096843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.839 [2024-07-15 23:28:41.096998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.839 [2024-07-15 23:28:41.097022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.839 [2024-07-15 23:28:41.097038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.839 [2024-07-15 23:28:41.097051] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.839 [2024-07-15 23:28:41.097079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.839 qpair failed and we were unable to recover it. 
00:25:25.839 [2024-07-15 23:28:41.106842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.839 [2024-07-15 23:28:41.106955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.839 [2024-07-15 23:28:41.106981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.839 [2024-07-15 23:28:41.106995] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.839 [2024-07-15 23:28:41.107009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.839 [2024-07-15 23:28:41.107037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.839 qpair failed and we were unable to recover it. 00:25:25.839 [2024-07-15 23:28:41.116903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.839 [2024-07-15 23:28:41.117011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.839 [2024-07-15 23:28:41.117037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.839 [2024-07-15 23:28:41.117052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.839 [2024-07-15 23:28:41.117066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.839 [2024-07-15 23:28:41.117095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.839 qpair failed and we were unable to recover it. 00:25:25.839 [2024-07-15 23:28:41.126915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.839 [2024-07-15 23:28:41.127035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.839 [2024-07-15 23:28:41.127060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.839 [2024-07-15 23:28:41.127074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.839 [2024-07-15 23:28:41.127087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.839 [2024-07-15 23:28:41.127115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.839 qpair failed and we were unable to recover it. 
00:25:25.839 [2024-07-15 23:28:41.136949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.839 [2024-07-15 23:28:41.137056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.839 [2024-07-15 23:28:41.137081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.839 [2024-07-15 23:28:41.137096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.839 [2024-07-15 23:28:41.137109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.839 [2024-07-15 23:28:41.137137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.839 qpair failed and we were unable to recover it. 00:25:25.839 [2024-07-15 23:28:41.147053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:25.839 [2024-07-15 23:28:41.147179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:25.839 [2024-07-15 23:28:41.147205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:25.839 [2024-07-15 23:28:41.147220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:25.839 [2024-07-15 23:28:41.147232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:25.839 [2024-07-15 23:28:41.147263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:25.839 qpair failed and we were unable to recover it. 00:25:26.098 [2024-07-15 23:28:41.157016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.098 [2024-07-15 23:28:41.157126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.098 [2024-07-15 23:28:41.157151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.098 [2024-07-15 23:28:41.157166] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.098 [2024-07-15 23:28:41.157179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.098 [2024-07-15 23:28:41.157209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.098 qpair failed and we were unable to recover it. 
00:25:26.098 [2024-07-15 23:28:41.167073] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.098 [2024-07-15 23:28:41.167202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.098 [2024-07-15 23:28:41.167234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.098 [2024-07-15 23:28:41.167250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.098 [2024-07-15 23:28:41.167263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.098 [2024-07-15 23:28:41.167302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.098 qpair failed and we were unable to recover it. 00:25:26.098 [2024-07-15 23:28:41.177112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.098 [2024-07-15 23:28:41.177236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.098 [2024-07-15 23:28:41.177261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.098 [2024-07-15 23:28:41.177276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.098 [2024-07-15 23:28:41.177289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.098 [2024-07-15 23:28:41.177317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.098 qpair failed and we were unable to recover it. 00:25:26.098 [2024-07-15 23:28:41.187123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.098 [2024-07-15 23:28:41.187248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.098 [2024-07-15 23:28:41.187273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.098 [2024-07-15 23:28:41.187288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.098 [2024-07-15 23:28:41.187301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.098 [2024-07-15 23:28:41.187329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.098 qpair failed and we were unable to recover it. 
00:25:26.098 [2024-07-15 23:28:41.197111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.098 [2024-07-15 23:28:41.197231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.098 [2024-07-15 23:28:41.197256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.099 [2024-07-15 23:28:41.197271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.099 [2024-07-15 23:28:41.197284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.099 [2024-07-15 23:28:41.197313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.099 qpair failed and we were unable to recover it. 00:25:26.099 [2024-07-15 23:28:41.207222] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.099 [2024-07-15 23:28:41.207364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.099 [2024-07-15 23:28:41.207393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.099 [2024-07-15 23:28:41.207408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.099 [2024-07-15 23:28:41.207421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.099 [2024-07-15 23:28:41.207454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.099 qpair failed and we were unable to recover it. 00:25:26.099 [2024-07-15 23:28:41.217221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.099 [2024-07-15 23:28:41.217349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.099 [2024-07-15 23:28:41.217374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.099 [2024-07-15 23:28:41.217389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.099 [2024-07-15 23:28:41.217402] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.099 [2024-07-15 23:28:41.217441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.099 qpair failed and we were unable to recover it. 
00:25:26.099 [2024-07-15 23:28:41.227228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.099 [2024-07-15 23:28:41.227356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.099 [2024-07-15 23:28:41.227381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.099 [2024-07-15 23:28:41.227396] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.099 [2024-07-15 23:28:41.227409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.099 [2024-07-15 23:28:41.227437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.099 qpair failed and we were unable to recover it. 00:25:26.099 [2024-07-15 23:28:41.237310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.099 [2024-07-15 23:28:41.237444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.099 [2024-07-15 23:28:41.237469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.099 [2024-07-15 23:28:41.237483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.099 [2024-07-15 23:28:41.237497] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.099 [2024-07-15 23:28:41.237534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.099 qpair failed and we were unable to recover it. 00:25:26.099 [2024-07-15 23:28:41.247296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.099 [2024-07-15 23:28:41.247425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.099 [2024-07-15 23:28:41.247450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.099 [2024-07-15 23:28:41.247465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.099 [2024-07-15 23:28:41.247478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.099 [2024-07-15 23:28:41.247506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.099 qpair failed and we were unable to recover it. 
00:25:26.099 [2024-07-15 23:28:41.257385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.099 [2024-07-15 23:28:41.257518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.099 [2024-07-15 23:28:41.257548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.099 [2024-07-15 23:28:41.257564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.099 [2024-07-15 23:28:41.257577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.099 [2024-07-15 23:28:41.257606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.099 qpair failed and we were unable to recover it. 00:25:26.099 [2024-07-15 23:28:41.267334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.099 [2024-07-15 23:28:41.267458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.099 [2024-07-15 23:28:41.267483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.099 [2024-07-15 23:28:41.267499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.099 [2024-07-15 23:28:41.267512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.099 [2024-07-15 23:28:41.267539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.099 qpair failed and we were unable to recover it. 00:25:26.099 [2024-07-15 23:28:41.277354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.099 [2024-07-15 23:28:41.277483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.099 [2024-07-15 23:28:41.277508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.099 [2024-07-15 23:28:41.277523] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.099 [2024-07-15 23:28:41.277535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.099 [2024-07-15 23:28:41.277564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.099 qpair failed and we were unable to recover it. 
00:25:26.099 [2024-07-15 23:28:41.287438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.099 [2024-07-15 23:28:41.287559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.099 [2024-07-15 23:28:41.287584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.099 [2024-07-15 23:28:41.287599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.099 [2024-07-15 23:28:41.287612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.099 [2024-07-15 23:28:41.287641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.099 qpair failed and we were unable to recover it. 00:25:26.099 [2024-07-15 23:28:41.297421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.099 [2024-07-15 23:28:41.297547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.099 [2024-07-15 23:28:41.297572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.099 [2024-07-15 23:28:41.297587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.099 [2024-07-15 23:28:41.297600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.099 [2024-07-15 23:28:41.297643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.099 qpair failed and we were unable to recover it. 00:25:26.099 [2024-07-15 23:28:41.307443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.099 [2024-07-15 23:28:41.307564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.099 [2024-07-15 23:28:41.307590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.099 [2024-07-15 23:28:41.307604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.099 [2024-07-15 23:28:41.307617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.099 [2024-07-15 23:28:41.307656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.099 qpair failed and we were unable to recover it. 
00:25:26.099 [2024-07-15 23:28:41.317498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.099 [2024-07-15 23:28:41.317652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.099 [2024-07-15 23:28:41.317678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.099 [2024-07-15 23:28:41.317692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.099 [2024-07-15 23:28:41.317705] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.099 [2024-07-15 23:28:41.317733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.099 qpair failed and we were unable to recover it. 00:25:26.099 [2024-07-15 23:28:41.327586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.099 [2024-07-15 23:28:41.327712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.099 [2024-07-15 23:28:41.327744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.099 [2024-07-15 23:28:41.327762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.099 [2024-07-15 23:28:41.327775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.099 [2024-07-15 23:28:41.327806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.099 qpair failed and we were unable to recover it. 00:25:26.099 [2024-07-15 23:28:41.337616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.099 [2024-07-15 23:28:41.337752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.099 [2024-07-15 23:28:41.337777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.099 [2024-07-15 23:28:41.337792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.100 [2024-07-15 23:28:41.337805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.100 [2024-07-15 23:28:41.337833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.100 qpair failed and we were unable to recover it. 
00:25:26.100 [2024-07-15 23:28:41.347555] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.100 [2024-07-15 23:28:41.347719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.100 [2024-07-15 23:28:41.347756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.100 [2024-07-15 23:28:41.347772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.100 [2024-07-15 23:28:41.347785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.100 [2024-07-15 23:28:41.347813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.100 qpair failed and we were unable to recover it. 00:25:26.100 [2024-07-15 23:28:41.357626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.100 [2024-07-15 23:28:41.357785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.100 [2024-07-15 23:28:41.357811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.100 [2024-07-15 23:28:41.357825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.100 [2024-07-15 23:28:41.357838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.100 [2024-07-15 23:28:41.357866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.100 qpair failed and we were unable to recover it. 00:25:26.100 [2024-07-15 23:28:41.367700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.100 [2024-07-15 23:28:41.367836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.100 [2024-07-15 23:28:41.367861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.100 [2024-07-15 23:28:41.367875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.100 [2024-07-15 23:28:41.367888] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.100 [2024-07-15 23:28:41.367917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.100 qpair failed and we were unable to recover it. 
00:25:26.100 [2024-07-15 23:28:41.377673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.100 [2024-07-15 23:28:41.377858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.100 [2024-07-15 23:28:41.377883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.100 [2024-07-15 23:28:41.377898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.100 [2024-07-15 23:28:41.377912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.100 [2024-07-15 23:28:41.377940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.100 qpair failed and we were unable to recover it. 00:25:26.100 [2024-07-15 23:28:41.387668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.100 [2024-07-15 23:28:41.387787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.100 [2024-07-15 23:28:41.387813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.100 [2024-07-15 23:28:41.387827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.100 [2024-07-15 23:28:41.387840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.100 [2024-07-15 23:28:41.387886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.100 qpair failed and we were unable to recover it. 00:25:26.100 [2024-07-15 23:28:41.397657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.100 [2024-07-15 23:28:41.397794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.100 [2024-07-15 23:28:41.397819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.100 [2024-07-15 23:28:41.397834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.100 [2024-07-15 23:28:41.397847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.100 [2024-07-15 23:28:41.397875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.100 qpair failed and we were unable to recover it. 
00:25:26.100 [2024-07-15 23:28:41.407795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.100 [2024-07-15 23:28:41.407921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.100 [2024-07-15 23:28:41.407945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.100 [2024-07-15 23:28:41.407959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.100 [2024-07-15 23:28:41.407972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.100 [2024-07-15 23:28:41.408003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.100 qpair failed and we were unable to recover it. 00:25:26.360 [2024-07-15 23:28:41.417827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.360 [2024-07-15 23:28:41.417946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.360 [2024-07-15 23:28:41.417972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.360 [2024-07-15 23:28:41.417987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.360 [2024-07-15 23:28:41.418000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.360 [2024-07-15 23:28:41.418028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.360 qpair failed and we were unable to recover it. 00:25:26.360 [2024-07-15 23:28:41.427790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.360 [2024-07-15 23:28:41.427895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.360 [2024-07-15 23:28:41.427920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.360 [2024-07-15 23:28:41.427935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.360 [2024-07-15 23:28:41.427948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.360 [2024-07-15 23:28:41.427976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.360 qpair failed and we were unable to recover it. 
00:25:26.360 [2024-07-15 23:28:41.437802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.360 [2024-07-15 23:28:41.437906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.360 [2024-07-15 23:28:41.437937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.360 [2024-07-15 23:28:41.437952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.360 [2024-07-15 23:28:41.437965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.360 [2024-07-15 23:28:41.438005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.360 qpair failed and we were unable to recover it. 00:25:26.360 [2024-07-15 23:28:41.447830] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.360 [2024-07-15 23:28:41.447974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.360 [2024-07-15 23:28:41.447999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.360 [2024-07-15 23:28:41.448013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.360 [2024-07-15 23:28:41.448026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.360 [2024-07-15 23:28:41.448055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.360 qpair failed and we were unable to recover it. 00:25:26.360 [2024-07-15 23:28:41.457860] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.360 [2024-07-15 23:28:41.457970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.360 [2024-07-15 23:28:41.457996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.360 [2024-07-15 23:28:41.458011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.360 [2024-07-15 23:28:41.458024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.360 [2024-07-15 23:28:41.458052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.360 qpair failed and we were unable to recover it. 
00:25:26.360 [2024-07-15 23:28:41.467886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.360 [2024-07-15 23:28:41.468030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.360 [2024-07-15 23:28:41.468054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.360 [2024-07-15 23:28:41.468069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.360 [2024-07-15 23:28:41.468082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.360 [2024-07-15 23:28:41.468120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.360 qpair failed and we were unable to recover it. 00:25:26.360 [2024-07-15 23:28:41.477982] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.360 [2024-07-15 23:28:41.478097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.360 [2024-07-15 23:28:41.478121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.360 [2024-07-15 23:28:41.478136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.360 [2024-07-15 23:28:41.478154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.360 [2024-07-15 23:28:41.478194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.360 qpair failed and we were unable to recover it. 00:25:26.360 [2024-07-15 23:28:41.487913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.360 [2024-07-15 23:28:41.488029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.360 [2024-07-15 23:28:41.488054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.360 [2024-07-15 23:28:41.488068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.360 [2024-07-15 23:28:41.488081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.360 [2024-07-15 23:28:41.488110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.360 qpair failed and we were unable to recover it. 
00:25:26.360 [2024-07-15 23:28:41.497988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.360 [2024-07-15 23:28:41.498098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.360 [2024-07-15 23:28:41.498124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.360 [2024-07-15 23:28:41.498140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.360 [2024-07-15 23:28:41.498154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.360 [2024-07-15 23:28:41.498182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.360 qpair failed and we were unable to recover it. 00:25:26.360 [2024-07-15 23:28:41.508004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.360 [2024-07-15 23:28:41.508112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.360 [2024-07-15 23:28:41.508138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.360 [2024-07-15 23:28:41.508153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.360 [2024-07-15 23:28:41.508166] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.360 [2024-07-15 23:28:41.508194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.360 qpair failed and we were unable to recover it. 00:25:26.360 [2024-07-15 23:28:41.518039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.360 [2024-07-15 23:28:41.518176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.360 [2024-07-15 23:28:41.518202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.361 [2024-07-15 23:28:41.518217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.361 [2024-07-15 23:28:41.518230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.361 [2024-07-15 23:28:41.518258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.361 qpair failed and we were unable to recover it. 
00:25:26.361 [2024-07-15 23:28:41.528060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.361 [2024-07-15 23:28:41.528175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.361 [2024-07-15 23:28:41.528200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.361 [2024-07-15 23:28:41.528215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.361 [2024-07-15 23:28:41.528227] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.361 [2024-07-15 23:28:41.528255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.361 qpair failed and we were unable to recover it. 00:25:26.361 [2024-07-15 23:28:41.538089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.361 [2024-07-15 23:28:41.538213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.361 [2024-07-15 23:28:41.538239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.361 [2024-07-15 23:28:41.538254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.361 [2024-07-15 23:28:41.538267] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.361 [2024-07-15 23:28:41.538302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.361 qpair failed and we were unable to recover it. 00:25:26.361 [2024-07-15 23:28:41.548177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.361 [2024-07-15 23:28:41.548308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.361 [2024-07-15 23:28:41.548333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.361 [2024-07-15 23:28:41.548348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.361 [2024-07-15 23:28:41.548361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.361 [2024-07-15 23:28:41.548390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.361 qpair failed and we were unable to recover it. 
00:25:26.361 [2024-07-15 23:28:41.558144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.361 [2024-07-15 23:28:41.558269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.361 [2024-07-15 23:28:41.558295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.361 [2024-07-15 23:28:41.558310] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.361 [2024-07-15 23:28:41.558323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.361 [2024-07-15 23:28:41.558351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.361 qpair failed and we were unable to recover it. 00:25:26.361 [2024-07-15 23:28:41.568198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.361 [2024-07-15 23:28:41.568317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.361 [2024-07-15 23:28:41.568342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.361 [2024-07-15 23:28:41.568356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.361 [2024-07-15 23:28:41.568375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.361 [2024-07-15 23:28:41.568404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.361 qpair failed and we were unable to recover it. 00:25:26.361 [2024-07-15 23:28:41.578218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.361 [2024-07-15 23:28:41.578325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.361 [2024-07-15 23:28:41.578351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.361 [2024-07-15 23:28:41.578365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.361 [2024-07-15 23:28:41.578379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.361 [2024-07-15 23:28:41.578407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.361 qpair failed and we were unable to recover it. 
00:25:26.361 [2024-07-15 23:28:41.588214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.361 [2024-07-15 23:28:41.588378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.361 [2024-07-15 23:28:41.588402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.361 [2024-07-15 23:28:41.588417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.361 [2024-07-15 23:28:41.588430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.361 [2024-07-15 23:28:41.588459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.361 qpair failed and we were unable to recover it. 00:25:26.361 [2024-07-15 23:28:41.598239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.361 [2024-07-15 23:28:41.598382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.361 [2024-07-15 23:28:41.598407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.361 [2024-07-15 23:28:41.598422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.361 [2024-07-15 23:28:41.598435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.361 [2024-07-15 23:28:41.598475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.361 qpair failed and we were unable to recover it. 00:25:26.361 [2024-07-15 23:28:41.608326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.361 [2024-07-15 23:28:41.608489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.361 [2024-07-15 23:28:41.608514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.361 [2024-07-15 23:28:41.608528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.361 [2024-07-15 23:28:41.608542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.361 [2024-07-15 23:28:41.608571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.361 qpair failed and we were unable to recover it. 
00:25:26.361 [2024-07-15 23:28:41.618304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.361 [2024-07-15 23:28:41.618433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.361 [2024-07-15 23:28:41.618459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.361 [2024-07-15 23:28:41.618474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.361 [2024-07-15 23:28:41.618487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.361 [2024-07-15 23:28:41.618514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.361 qpair failed and we were unable to recover it. 00:25:26.361 [2024-07-15 23:28:41.628438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.361 [2024-07-15 23:28:41.628544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.361 [2024-07-15 23:28:41.628569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.361 [2024-07-15 23:28:41.628584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.361 [2024-07-15 23:28:41.628598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.361 [2024-07-15 23:28:41.628625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.361 qpair failed and we were unable to recover it. 00:25:26.361 [2024-07-15 23:28:41.638367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.361 [2024-07-15 23:28:41.638483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.361 [2024-07-15 23:28:41.638509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.361 [2024-07-15 23:28:41.638524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.361 [2024-07-15 23:28:41.638537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.361 [2024-07-15 23:28:41.638565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.361 qpair failed and we were unable to recover it. 
00:25:26.361 [2024-07-15 23:28:41.648390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.361 [2024-07-15 23:28:41.648508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.361 [2024-07-15 23:28:41.648533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.361 [2024-07-15 23:28:41.648547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.361 [2024-07-15 23:28:41.648560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.361 [2024-07-15 23:28:41.648589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.361 qpair failed and we were unable to recover it. 00:25:26.361 [2024-07-15 23:28:41.658421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.361 [2024-07-15 23:28:41.658543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.361 [2024-07-15 23:28:41.658569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.361 [2024-07-15 23:28:41.658584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.361 [2024-07-15 23:28:41.658602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.361 [2024-07-15 23:28:41.658630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.362 qpair failed and we were unable to recover it. 00:25:26.362 [2024-07-15 23:28:41.668529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.362 [2024-07-15 23:28:41.668655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.362 [2024-07-15 23:28:41.668680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.362 [2024-07-15 23:28:41.668695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.362 [2024-07-15 23:28:41.668708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.362 [2024-07-15 23:28:41.668743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.362 qpair failed and we were unable to recover it. 
00:25:26.620 [2024-07-15 23:28:41.678473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.620 [2024-07-15 23:28:41.678576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.620 [2024-07-15 23:28:41.678601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.620 [2024-07-15 23:28:41.678616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.620 [2024-07-15 23:28:41.678629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.620 [2024-07-15 23:28:41.678656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.620 qpair failed and we were unable to recover it. 00:25:26.620 [2024-07-15 23:28:41.688516] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.620 [2024-07-15 23:28:41.688657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.620 [2024-07-15 23:28:41.688683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.620 [2024-07-15 23:28:41.688697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.620 [2024-07-15 23:28:41.688710] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.620 [2024-07-15 23:28:41.688757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.620 qpair failed and we were unable to recover it. 00:25:26.620 [2024-07-15 23:28:41.698567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.620 [2024-07-15 23:28:41.698732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.620 [2024-07-15 23:28:41.698764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.620 [2024-07-15 23:28:41.698779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.621 [2024-07-15 23:28:41.698792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9f1e0 00:25:26.621 [2024-07-15 23:28:41.698820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:26.621 qpair failed and we were unable to recover it. 
00:25:26.621 [2024-07-15 23:28:41.708619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.621 [2024-07-15 23:28:41.708759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.621 [2024-07-15 23:28:41.708799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.621 [2024-07-15 23:28:41.708815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.621 [2024-07-15 23:28:41.708829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:26.621 [2024-07-15 23:28:41.708870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:26.621 qpair failed and we were unable to recover it. 00:25:26.621 [2024-07-15 23:28:41.718619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.621 [2024-07-15 23:28:41.718755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.621 [2024-07-15 23:28:41.718782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.621 [2024-07-15 23:28:41.718798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.621 [2024-07-15 23:28:41.718811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:26.621 [2024-07-15 23:28:41.718854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:26.621 qpair failed and we were unable to recover it. 00:25:26.621 [2024-07-15 23:28:41.728644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.621 [2024-07-15 23:28:41.728786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.621 [2024-07-15 23:28:41.728813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.621 [2024-07-15 23:28:41.728829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.621 [2024-07-15 23:28:41.728842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:26.621 [2024-07-15 23:28:41.728873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:26.621 qpair failed and we were unable to recover it. 
00:25:26.621 [2024-07-15 23:28:41.738764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.621 [2024-07-15 23:28:41.738881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.621 [2024-07-15 23:28:41.738908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.621 [2024-07-15 23:28:41.738923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.621 [2024-07-15 23:28:41.738937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:26.621 [2024-07-15 23:28:41.738967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:26.621 qpair failed and we were unable to recover it. 00:25:26.621 [2024-07-15 23:28:41.748700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.621 [2024-07-15 23:28:41.748825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.621 [2024-07-15 23:28:41.748852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.621 [2024-07-15 23:28:41.748873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.621 [2024-07-15 23:28:41.748887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:26.621 [2024-07-15 23:28:41.748918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:26.621 qpair failed and we were unable to recover it. 00:25:26.621 [2024-07-15 23:28:41.758755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.621 [2024-07-15 23:28:41.758909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.621 [2024-07-15 23:28:41.758935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.621 [2024-07-15 23:28:41.758951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.621 [2024-07-15 23:28:41.758964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:26.621 [2024-07-15 23:28:41.758995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:26.621 qpair failed and we were unable to recover it. 
00:25:26.621 [2024-07-15 23:28:41.768774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.621 [2024-07-15 23:28:41.768878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.621 [2024-07-15 23:28:41.768903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.621 [2024-07-15 23:28:41.768918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.621 [2024-07-15 23:28:41.768931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:26.621 [2024-07-15 23:28:41.768961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:26.621 qpair failed and we were unable to recover it. 00:25:26.621 [2024-07-15 23:28:41.778828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.621 [2024-07-15 23:28:41.778973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.621 [2024-07-15 23:28:41.778999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.621 [2024-07-15 23:28:41.779013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.621 [2024-07-15 23:28:41.779027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:26.621 [2024-07-15 23:28:41.779057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:26.621 qpair failed and we were unable to recover it. 00:25:26.621 [2024-07-15 23:28:41.788825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.621 [2024-07-15 23:28:41.788936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.621 [2024-07-15 23:28:41.788962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.621 [2024-07-15 23:28:41.788977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.621 [2024-07-15 23:28:41.788990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:26.621 [2024-07-15 23:28:41.789019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:26.621 qpair failed and we were unable to recover it. 
00:25:26.621 [2024-07-15 23:28:41.798849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.621 [2024-07-15 23:28:41.798957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.621 [2024-07-15 23:28:41.798983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.621 [2024-07-15 23:28:41.798998] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.621 [2024-07-15 23:28:41.799011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:26.621 [2024-07-15 23:28:41.799041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:26.621 qpair failed and we were unable to recover it. 00:25:26.621 [2024-07-15 23:28:41.808883] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.621 [2024-07-15 23:28:41.809001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.621 [2024-07-15 23:28:41.809026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.621 [2024-07-15 23:28:41.809041] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.621 [2024-07-15 23:28:41.809055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:26.621 [2024-07-15 23:28:41.809085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:26.621 qpair failed and we were unable to recover it. 00:25:26.621 [2024-07-15 23:28:41.818912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.621 [2024-07-15 23:28:41.819032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.621 [2024-07-15 23:28:41.819058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.621 [2024-07-15 23:28:41.819073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.621 [2024-07-15 23:28:41.819086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:26.621 [2024-07-15 23:28:41.819116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:26.621 qpair failed and we were unable to recover it. 
00:25:26.621 [2024-07-15 23:28:41.828911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.621 [2024-07-15 23:28:41.829022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.621 [2024-07-15 23:28:41.829049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.621 [2024-07-15 23:28:41.829064] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.621 [2024-07-15 23:28:41.829078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:26.621 [2024-07-15 23:28:41.829108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:26.621 qpair failed and we were unable to recover it. 00:25:26.621 [2024-07-15 23:28:41.839041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.621 [2024-07-15 23:28:41.839158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.621 [2024-07-15 23:28:41.839188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.621 [2024-07-15 23:28:41.839205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.621 [2024-07-15 23:28:41.839219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:26.621 [2024-07-15 23:28:41.839249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:26.621 qpair failed and we were unable to recover it. 00:25:26.622 [2024-07-15 23:28:41.848990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.622 [2024-07-15 23:28:41.849132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.622 [2024-07-15 23:28:41.849159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.622 [2024-07-15 23:28:41.849174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.622 [2024-07-15 23:28:41.849187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:26.622 [2024-07-15 23:28:41.849217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:26.622 qpair failed and we were unable to recover it. 
00:25:26.622 [2024-07-15 23:28:41.859016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.622 [2024-07-15 23:28:41.859187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.622 [2024-07-15 23:28:41.859213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.622 [2024-07-15 23:28:41.859228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.622 [2024-07-15 23:28:41.859241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:26.622 [2024-07-15 23:28:41.859270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:26.622 qpair failed and we were unable to recover it. 00:25:26.622 [2024-07-15 23:28:41.869069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.622 [2024-07-15 23:28:41.869212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.622 [2024-07-15 23:28:41.869238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.622 [2024-07-15 23:28:41.869253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.622 [2024-07-15 23:28:41.869266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:26.622 [2024-07-15 23:28:41.869295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:26.622 qpair failed and we were unable to recover it. 00:25:26.622 [2024-07-15 23:28:41.879164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.622 [2024-07-15 23:28:41.879293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.622 [2024-07-15 23:28:41.879319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.622 [2024-07-15 23:28:41.879334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.622 [2024-07-15 23:28:41.879347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:26.622 [2024-07-15 23:28:41.879383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:26.622 qpair failed and we were unable to recover it. 
00:25:26.622 [2024-07-15 23:28:41.889109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.622 [2024-07-15 23:28:41.889240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.622 [2024-07-15 23:28:41.889266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.622 [2024-07-15 23:28:41.889281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.622 [2024-07-15 23:28:41.889294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:26.622 [2024-07-15 23:28:41.889324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:26.622 qpair failed and we were unable to recover it. 00:25:26.622 [2024-07-15 23:28:41.899141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.622 [2024-07-15 23:28:41.899276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.622 [2024-07-15 23:28:41.899301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.622 [2024-07-15 23:28:41.899317] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.622 [2024-07-15 23:28:41.899330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:26.622 [2024-07-15 23:28:41.899360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:26.622 qpair failed and we were unable to recover it. 00:25:26.622 [2024-07-15 23:28:41.909155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.622 [2024-07-15 23:28:41.909317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.622 [2024-07-15 23:28:41.909342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.622 [2024-07-15 23:28:41.909357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.622 [2024-07-15 23:28:41.909371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:26.622 [2024-07-15 23:28:41.909401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:26.622 qpair failed and we were unable to recover it. 
00:25:26.622 [2024-07-15 23:28:41.919192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.622 [2024-07-15 23:28:41.919317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.622 [2024-07-15 23:28:41.919343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.622 [2024-07-15 23:28:41.919359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.622 [2024-07-15 23:28:41.919372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:26.622 [2024-07-15 23:28:41.919403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:26.622 qpair failed and we were unable to recover it. 00:25:26.622 [2024-07-15 23:28:41.929251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.622 [2024-07-15 23:28:41.929367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.622 [2024-07-15 23:28:41.929398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.622 [2024-07-15 23:28:41.929414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.622 [2024-07-15 23:28:41.929427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:26.622 [2024-07-15 23:28:41.929457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:26.622 qpair failed and we were unable to recover it. 00:25:26.881 [2024-07-15 23:28:41.939352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.881 [2024-07-15 23:28:41.939468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.881 [2024-07-15 23:28:41.939494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.881 [2024-07-15 23:28:41.939508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.881 [2024-07-15 23:28:41.939522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:26.881 [2024-07-15 23:28:41.939553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:26.881 qpair failed and we were unable to recover it. 
00:25:26.881 [2024-07-15 23:28:41.949268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.881 [2024-07-15 23:28:41.949443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.881 [2024-07-15 23:28:41.949469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.881 [2024-07-15 23:28:41.949484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.881 [2024-07-15 23:28:41.949498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:26.881 [2024-07-15 23:28:41.949528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:26.881 qpair failed and we were unable to recover it. 00:25:26.881 [2024-07-15 23:28:41.959353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.881 [2024-07-15 23:28:41.959538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.881 [2024-07-15 23:28:41.959564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.881 [2024-07-15 23:28:41.959579] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.881 [2024-07-15 23:28:41.959592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:26.881 [2024-07-15 23:28:41.959622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:26.881 qpair failed and we were unable to recover it. 00:25:26.881 [2024-07-15 23:28:41.969317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.881 [2024-07-15 23:28:41.969481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.881 [2024-07-15 23:28:41.969507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.881 [2024-07-15 23:28:41.969522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.881 [2024-07-15 23:28:41.969534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:26.881 [2024-07-15 23:28:41.969571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:26.881 qpair failed and we were unable to recover it. 
00:25:26.881 [2024-07-15 23:28:41.979384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.881 [2024-07-15 23:28:41.979555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.881 [2024-07-15 23:28:41.979580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.881 [2024-07-15 23:28:41.979595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.881 [2024-07-15 23:28:41.979608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:26.881 [2024-07-15 23:28:41.979638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:26.881 qpair failed and we were unable to recover it. 00:25:26.881 [2024-07-15 23:28:41.989398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.881 [2024-07-15 23:28:41.989560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.881 [2024-07-15 23:28:41.989585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.881 [2024-07-15 23:28:41.989600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.881 [2024-07-15 23:28:41.989613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:26.881 [2024-07-15 23:28:41.989644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:26.881 qpair failed and we were unable to recover it. 00:25:26.881 [2024-07-15 23:28:41.999464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.881 [2024-07-15 23:28:41.999593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.881 [2024-07-15 23:28:41.999618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.881 [2024-07-15 23:28:41.999633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.881 [2024-07-15 23:28:41.999647] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:26.881 [2024-07-15 23:28:41.999678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:26.881 qpair failed and we were unable to recover it. 
00:25:26.881 [2024-07-15 23:28:42.009478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.882 [2024-07-15 23:28:42.009599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.882 [2024-07-15 23:28:42.009624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.882 [2024-07-15 23:28:42.009639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.882 [2024-07-15 23:28:42.009652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:26.882 [2024-07-15 23:28:42.009682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:26.882 qpair failed and we were unable to recover it. 00:25:26.882 [2024-07-15 23:28:42.019485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.882 [2024-07-15 23:28:42.019610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.882 [2024-07-15 23:28:42.019639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.882 [2024-07-15 23:28:42.019654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.882 [2024-07-15 23:28:42.019666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:26.882 [2024-07-15 23:28:42.019696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:26.882 qpair failed and we were unable to recover it. 00:25:26.882 [2024-07-15 23:28:42.029512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.882 [2024-07-15 23:28:42.029669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.882 [2024-07-15 23:28:42.029695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.882 [2024-07-15 23:28:42.029710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.882 [2024-07-15 23:28:42.029723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:26.882 [2024-07-15 23:28:42.029765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:26.882 qpair failed and we were unable to recover it. 
00:25:26.882 [2024-07-15 23:28:42.039565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.882 [2024-07-15 23:28:42.039693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.882 [2024-07-15 23:28:42.039719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.882 [2024-07-15 23:28:42.039733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.882 [2024-07-15 23:28:42.039757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:26.882 [2024-07-15 23:28:42.039788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:26.882 qpair failed and we were unable to recover it. 00:25:26.882 [2024-07-15 23:28:42.049567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.882 [2024-07-15 23:28:42.049707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.882 [2024-07-15 23:28:42.049732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.882 [2024-07-15 23:28:42.049756] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.882 [2024-07-15 23:28:42.049770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:26.882 [2024-07-15 23:28:42.049800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:26.882 qpair failed and we were unable to recover it. 00:25:26.882 [2024-07-15 23:28:42.059661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.882 [2024-07-15 23:28:42.059848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.882 [2024-07-15 23:28:42.059874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.882 [2024-07-15 23:28:42.059889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.882 [2024-07-15 23:28:42.059907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:26.882 [2024-07-15 23:28:42.059940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:26.882 qpair failed and we were unable to recover it. 
00:25:26.882 [2024-07-15 23:28:42.069734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.882 [2024-07-15 23:28:42.069856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.882 [2024-07-15 23:28:42.069881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.882 [2024-07-15 23:28:42.069896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.882 [2024-07-15 23:28:42.069909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:26.882 [2024-07-15 23:28:42.069939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:26.882 qpair failed and we were unable to recover it. 00:25:26.882 [2024-07-15 23:28:42.079660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.882 [2024-07-15 23:28:42.079789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.882 [2024-07-15 23:28:42.079814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.882 [2024-07-15 23:28:42.079829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.882 [2024-07-15 23:28:42.079842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:26.882 [2024-07-15 23:28:42.079874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:26.882 qpair failed and we were unable to recover it. 00:25:26.882 [2024-07-15 23:28:42.089684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.882 [2024-07-15 23:28:42.089814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.882 [2024-07-15 23:28:42.089840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.882 [2024-07-15 23:28:42.089855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.882 [2024-07-15 23:28:42.089869] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:26.882 [2024-07-15 23:28:42.089899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:26.882 qpair failed and we were unable to recover it. 
00:25:26.882 [2024-07-15 23:28:42.099751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.882 [2024-07-15 23:28:42.099864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.882 [2024-07-15 23:28:42.099888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.882 [2024-07-15 23:28:42.099903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.882 [2024-07-15 23:28:42.099916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:26.882 [2024-07-15 23:28:42.099946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:26.882 qpair failed and we were unable to recover it. 00:25:26.882 [2024-07-15 23:28:42.109785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.882 [2024-07-15 23:28:42.109900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.882 [2024-07-15 23:28:42.109926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.882 [2024-07-15 23:28:42.109942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.882 [2024-07-15 23:28:42.109955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:26.882 [2024-07-15 23:28:42.109985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:26.882 qpair failed and we were unable to recover it. 00:25:26.882 [2024-07-15 23:28:42.119735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.883 [2024-07-15 23:28:42.119884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.883 [2024-07-15 23:28:42.119910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.883 [2024-07-15 23:28:42.119926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.883 [2024-07-15 23:28:42.119939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:26.883 [2024-07-15 23:28:42.119969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:26.883 qpair failed and we were unable to recover it. 
00:25:26.883 [2024-07-15 23:28:42.129787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.883 [2024-07-15 23:28:42.129900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.883 [2024-07-15 23:28:42.129926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.883 [2024-07-15 23:28:42.129942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.883 [2024-07-15 23:28:42.129956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:26.883 [2024-07-15 23:28:42.129988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:26.883 qpair failed and we were unable to recover it. 00:25:26.883 [2024-07-15 23:28:42.139897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.883 [2024-07-15 23:28:42.140008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.883 [2024-07-15 23:28:42.140034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.883 [2024-07-15 23:28:42.140049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.883 [2024-07-15 23:28:42.140068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:26.883 [2024-07-15 23:28:42.140098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:26.883 qpair failed and we were unable to recover it. 00:25:26.883 [2024-07-15 23:28:42.149902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.883 [2024-07-15 23:28:42.150034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.883 [2024-07-15 23:28:42.150059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.883 [2024-07-15 23:28:42.150080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.883 [2024-07-15 23:28:42.150094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:26.883 [2024-07-15 23:28:42.150125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:26.883 qpair failed and we were unable to recover it. 
00:25:26.883 [2024-07-15 23:28:42.159915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.883 [2024-07-15 23:28:42.160034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.883 [2024-07-15 23:28:42.160060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.883 [2024-07-15 23:28:42.160075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.883 [2024-07-15 23:28:42.160088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:26.883 [2024-07-15 23:28:42.160118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:26.883 qpair failed and we were unable to recover it. 00:25:26.883 [2024-07-15 23:28:42.169994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.883 [2024-07-15 23:28:42.170114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.883 [2024-07-15 23:28:42.170140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.883 [2024-07-15 23:28:42.170155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.883 [2024-07-15 23:28:42.170168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:26.883 [2024-07-15 23:28:42.170199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:26.883 qpair failed and we were unable to recover it. 00:25:26.883 [2024-07-15 23:28:42.180002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.883 [2024-07-15 23:28:42.180140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.883 [2024-07-15 23:28:42.180165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.883 [2024-07-15 23:28:42.180180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.883 [2024-07-15 23:28:42.180193] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:26.883 [2024-07-15 23:28:42.180223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:26.883 qpair failed and we were unable to recover it. 
00:25:26.883 [2024-07-15 23:28:42.190010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:26.883 [2024-07-15 23:28:42.190149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:26.883 [2024-07-15 23:28:42.190175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:26.883 [2024-07-15 23:28:42.190190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:26.883 [2024-07-15 23:28:42.190204] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:26.883 [2024-07-15 23:28:42.190233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:26.883 qpair failed and we were unable to recover it. 00:25:27.142 [2024-07-15 23:28:42.200068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.142 [2024-07-15 23:28:42.200255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.142 [2024-07-15 23:28:42.200281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.142 [2024-07-15 23:28:42.200296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.142 [2024-07-15 23:28:42.200309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.142 [2024-07-15 23:28:42.200339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.142 qpair failed and we were unable to recover it. 00:25:27.142 [2024-07-15 23:28:42.210044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.142 [2024-07-15 23:28:42.210160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.142 [2024-07-15 23:28:42.210186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.142 [2024-07-15 23:28:42.210201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.142 [2024-07-15 23:28:42.210214] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.142 [2024-07-15 23:28:42.210244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.142 qpair failed and we were unable to recover it. 
00:25:27.142 [2024-07-15 23:28:42.220102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.142 [2024-07-15 23:28:42.220245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.142 [2024-07-15 23:28:42.220270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.142 [2024-07-15 23:28:42.220284] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.142 [2024-07-15 23:28:42.220299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.142 [2024-07-15 23:28:42.220330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.142 qpair failed and we were unable to recover it. 00:25:27.142 [2024-07-15 23:28:42.230120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.143 [2024-07-15 23:28:42.230257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.143 [2024-07-15 23:28:42.230282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.143 [2024-07-15 23:28:42.230296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.143 [2024-07-15 23:28:42.230309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.143 [2024-07-15 23:28:42.230340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.143 qpair failed and we were unable to recover it. 00:25:27.143 [2024-07-15 23:28:42.240143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.143 [2024-07-15 23:28:42.240262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.143 [2024-07-15 23:28:42.240287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.143 [2024-07-15 23:28:42.240307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.143 [2024-07-15 23:28:42.240321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.143 [2024-07-15 23:28:42.240352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.143 qpair failed and we were unable to recover it. 
00:25:27.143 [2024-07-15 23:28:42.250181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.143 [2024-07-15 23:28:42.250303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.143 [2024-07-15 23:28:42.250328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.143 [2024-07-15 23:28:42.250343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.143 [2024-07-15 23:28:42.250356] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.143 [2024-07-15 23:28:42.250387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.143 qpair failed and we were unable to recover it. 00:25:27.143 [2024-07-15 23:28:42.260204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.143 [2024-07-15 23:28:42.260331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.143 [2024-07-15 23:28:42.260356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.143 [2024-07-15 23:28:42.260371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.143 [2024-07-15 23:28:42.260384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.143 [2024-07-15 23:28:42.260413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.143 qpair failed and we were unable to recover it. 00:25:27.143 [2024-07-15 23:28:42.270242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.143 [2024-07-15 23:28:42.270372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.143 [2024-07-15 23:28:42.270397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.143 [2024-07-15 23:28:42.270412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.143 [2024-07-15 23:28:42.270425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.143 [2024-07-15 23:28:42.270454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.143 qpair failed and we were unable to recover it. 
00:25:27.143 [2024-07-15 23:28:42.280324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.143 [2024-07-15 23:28:42.280456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.143 [2024-07-15 23:28:42.280482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.143 [2024-07-15 23:28:42.280496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.143 [2024-07-15 23:28:42.280509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.143 [2024-07-15 23:28:42.280539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.143 qpair failed and we were unable to recover it. 00:25:27.143 [2024-07-15 23:28:42.290299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.143 [2024-07-15 23:28:42.290466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.143 [2024-07-15 23:28:42.290491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.143 [2024-07-15 23:28:42.290507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.143 [2024-07-15 23:28:42.290520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.143 [2024-07-15 23:28:42.290550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.143 qpair failed and we were unable to recover it. 00:25:27.143 [2024-07-15 23:28:42.300410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.143 [2024-07-15 23:28:42.300548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.143 [2024-07-15 23:28:42.300573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.143 [2024-07-15 23:28:42.300588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.143 [2024-07-15 23:28:42.300602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.143 [2024-07-15 23:28:42.300631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.143 qpair failed and we were unable to recover it. 
00:25:27.143 [2024-07-15 23:28:42.310351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.143 [2024-07-15 23:28:42.310476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.143 [2024-07-15 23:28:42.310502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.143 [2024-07-15 23:28:42.310516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.143 [2024-07-15 23:28:42.310530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.143 [2024-07-15 23:28:42.310563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.143 qpair failed and we were unable to recover it. 00:25:27.143 [2024-07-15 23:28:42.320397] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.143 [2024-07-15 23:28:42.320519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.143 [2024-07-15 23:28:42.320544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.143 [2024-07-15 23:28:42.320558] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.143 [2024-07-15 23:28:42.320572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.143 [2024-07-15 23:28:42.320601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.143 qpair failed and we were unable to recover it. 00:25:27.143 [2024-07-15 23:28:42.330406] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.143 [2024-07-15 23:28:42.330529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.143 [2024-07-15 23:28:42.330560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.143 [2024-07-15 23:28:42.330576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.143 [2024-07-15 23:28:42.330590] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.143 [2024-07-15 23:28:42.330619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.143 qpair failed and we were unable to recover it. 
00:25:27.143 [2024-07-15 23:28:42.340413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.143 [2024-07-15 23:28:42.340540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.144 [2024-07-15 23:28:42.340566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.144 [2024-07-15 23:28:42.340580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.144 [2024-07-15 23:28:42.340594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.144 [2024-07-15 23:28:42.340624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.144 qpair failed and we were unable to recover it. 00:25:27.144 [2024-07-15 23:28:42.350450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.144 [2024-07-15 23:28:42.350581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.144 [2024-07-15 23:28:42.350607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.144 [2024-07-15 23:28:42.350622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.144 [2024-07-15 23:28:42.350635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.144 [2024-07-15 23:28:42.350665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.144 qpair failed and we were unable to recover it. 00:25:27.144 [2024-07-15 23:28:42.360497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.144 [2024-07-15 23:28:42.360615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.144 [2024-07-15 23:28:42.360641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.144 [2024-07-15 23:28:42.360657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.144 [2024-07-15 23:28:42.360670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.144 [2024-07-15 23:28:42.360701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.144 qpair failed and we were unable to recover it. 
00:25:27.144 [2024-07-15 23:28:42.370504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.144 [2024-07-15 23:28:42.370623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.144 [2024-07-15 23:28:42.370649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.144 [2024-07-15 23:28:42.370663] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.144 [2024-07-15 23:28:42.370677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.144 [2024-07-15 23:28:42.370713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.144 qpair failed and we were unable to recover it. 00:25:27.144 [2024-07-15 23:28:42.380517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.144 [2024-07-15 23:28:42.380641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.144 [2024-07-15 23:28:42.380667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.144 [2024-07-15 23:28:42.380682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.144 [2024-07-15 23:28:42.380696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.144 [2024-07-15 23:28:42.380726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.144 qpair failed and we were unable to recover it. 00:25:27.144 [2024-07-15 23:28:42.390567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.144 [2024-07-15 23:28:42.390691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.144 [2024-07-15 23:28:42.390717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.144 [2024-07-15 23:28:42.390731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.144 [2024-07-15 23:28:42.390756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.144 [2024-07-15 23:28:42.390788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.144 qpair failed and we were unable to recover it. 
00:25:27.144 [2024-07-15 23:28:42.400606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.144 [2024-07-15 23:28:42.400748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.144 [2024-07-15 23:28:42.400774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.144 [2024-07-15 23:28:42.400788] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.144 [2024-07-15 23:28:42.400801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.144 [2024-07-15 23:28:42.400831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.144 qpair failed and we were unable to recover it. 00:25:27.144 [2024-07-15 23:28:42.410637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.144 [2024-07-15 23:28:42.410773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.144 [2024-07-15 23:28:42.410799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.144 [2024-07-15 23:28:42.410814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.144 [2024-07-15 23:28:42.410827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.144 [2024-07-15 23:28:42.410857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.144 qpair failed and we were unable to recover it. 00:25:27.144 [2024-07-15 23:28:42.420656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.144 [2024-07-15 23:28:42.420792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.144 [2024-07-15 23:28:42.420822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.144 [2024-07-15 23:28:42.420838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.144 [2024-07-15 23:28:42.420851] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.144 [2024-07-15 23:28:42.420882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.144 qpair failed and we were unable to recover it. 
00:25:27.144 [2024-07-15 23:28:42.430681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.144 [2024-07-15 23:28:42.430811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.144 [2024-07-15 23:28:42.430837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.144 [2024-07-15 23:28:42.430852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.144 [2024-07-15 23:28:42.430866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.144 [2024-07-15 23:28:42.430896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.144 qpair failed and we were unable to recover it. 00:25:27.144 [2024-07-15 23:28:42.440752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.144 [2024-07-15 23:28:42.440900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.144 [2024-07-15 23:28:42.440926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.144 [2024-07-15 23:28:42.440941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.144 [2024-07-15 23:28:42.440954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.144 [2024-07-15 23:28:42.440985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.144 qpair failed and we were unable to recover it. 00:25:27.144 [2024-07-15 23:28:42.450758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.144 [2024-07-15 23:28:42.450864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.144 [2024-07-15 23:28:42.450889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.144 [2024-07-15 23:28:42.450905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.144 [2024-07-15 23:28:42.450919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.144 [2024-07-15 23:28:42.450950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.144 qpair failed and we were unable to recover it. 
00:25:27.404 [2024-07-15 23:28:42.460799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.404 [2024-07-15 23:28:42.460918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.404 [2024-07-15 23:28:42.460943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.404 [2024-07-15 23:28:42.460959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.404 [2024-07-15 23:28:42.460980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.404 [2024-07-15 23:28:42.461012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.404 qpair failed and we were unable to recover it. 00:25:27.404 [2024-07-15 23:28:42.470804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.404 [2024-07-15 23:28:42.470911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.404 [2024-07-15 23:28:42.470936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.404 [2024-07-15 23:28:42.470951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.404 [2024-07-15 23:28:42.470965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.404 [2024-07-15 23:28:42.470996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.404 qpair failed and we were unable to recover it. 00:25:27.404 [2024-07-15 23:28:42.480832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.404 [2024-07-15 23:28:42.480991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.404 [2024-07-15 23:28:42.481016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.404 [2024-07-15 23:28:42.481031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.404 [2024-07-15 23:28:42.481044] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.404 [2024-07-15 23:28:42.481074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.404 qpair failed and we were unable to recover it. 
00:25:27.404 [2024-07-15 23:28:42.490804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.404 [2024-07-15 23:28:42.490913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.404 [2024-07-15 23:28:42.490939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.404 [2024-07-15 23:28:42.490954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.404 [2024-07-15 23:28:42.490967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.404 [2024-07-15 23:28:42.490999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.404 qpair failed and we were unable to recover it. 00:25:27.404 [2024-07-15 23:28:42.500948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.404 [2024-07-15 23:28:42.501059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.404 [2024-07-15 23:28:42.501084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.404 [2024-07-15 23:28:42.501099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.404 [2024-07-15 23:28:42.501111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.404 [2024-07-15 23:28:42.501142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.404 qpair failed and we were unable to recover it. 00:25:27.404 [2024-07-15 23:28:42.510941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.404 [2024-07-15 23:28:42.511060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.404 [2024-07-15 23:28:42.511085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.404 [2024-07-15 23:28:42.511100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.404 [2024-07-15 23:28:42.511114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.404 [2024-07-15 23:28:42.511144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.404 qpair failed and we were unable to recover it. 
00:25:27.404 [2024-07-15 23:28:42.520968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.404 [2024-07-15 23:28:42.521075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.404 [2024-07-15 23:28:42.521100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.404 [2024-07-15 23:28:42.521115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.404 [2024-07-15 23:28:42.521128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.404 [2024-07-15 23:28:42.521157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.404 qpair failed and we were unable to recover it. 00:25:27.404 [2024-07-15 23:28:42.530991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.404 [2024-07-15 23:28:42.531130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.404 [2024-07-15 23:28:42.531155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.404 [2024-07-15 23:28:42.531169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.404 [2024-07-15 23:28:42.531183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.404 [2024-07-15 23:28:42.531213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.404 qpair failed and we were unable to recover it. 00:25:27.404 [2024-07-15 23:28:42.541093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.404 [2024-07-15 23:28:42.541244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.404 [2024-07-15 23:28:42.541270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.404 [2024-07-15 23:28:42.541285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.404 [2024-07-15 23:28:42.541298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.404 [2024-07-15 23:28:42.541329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.404 qpair failed and we were unable to recover it. 
00:25:27.404 [2024-07-15 23:28:42.551032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.404 [2024-07-15 23:28:42.551169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.404 [2024-07-15 23:28:42.551194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.404 [2024-07-15 23:28:42.551215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.404 [2024-07-15 23:28:42.551229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.404 [2024-07-15 23:28:42.551260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.404 qpair failed and we were unable to recover it. 00:25:27.404 [2024-07-15 23:28:42.561064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.405 [2024-07-15 23:28:42.561185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.405 [2024-07-15 23:28:42.561211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.405 [2024-07-15 23:28:42.561226] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.405 [2024-07-15 23:28:42.561239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.405 [2024-07-15 23:28:42.561268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.405 qpair failed and we were unable to recover it. 00:25:27.405 [2024-07-15 23:28:42.571120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.405 [2024-07-15 23:28:42.571249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.405 [2024-07-15 23:28:42.571275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.405 [2024-07-15 23:28:42.571290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.405 [2024-07-15 23:28:42.571304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.405 [2024-07-15 23:28:42.571333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.405 qpair failed and we were unable to recover it. 
00:25:27.405 [2024-07-15 23:28:42.581140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.405 [2024-07-15 23:28:42.581267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.405 [2024-07-15 23:28:42.581293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.405 [2024-07-15 23:28:42.581308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.405 [2024-07-15 23:28:42.581321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.405 [2024-07-15 23:28:42.581351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.405 qpair failed and we were unable to recover it. 00:25:27.405 [2024-07-15 23:28:42.591168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.405 [2024-07-15 23:28:42.591305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.405 [2024-07-15 23:28:42.591330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.405 [2024-07-15 23:28:42.591346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.405 [2024-07-15 23:28:42.591359] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.405 [2024-07-15 23:28:42.591389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.405 qpair failed and we were unable to recover it. 00:25:27.405 [2024-07-15 23:28:42.601255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.405 [2024-07-15 23:28:42.601371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.405 [2024-07-15 23:28:42.601397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.405 [2024-07-15 23:28:42.601411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.405 [2024-07-15 23:28:42.601425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.405 [2024-07-15 23:28:42.601464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.405 qpair failed and we were unable to recover it. 
00:25:27.405 [2024-07-15 23:28:42.611173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.405 [2024-07-15 23:28:42.611305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.405 [2024-07-15 23:28:42.611330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.405 [2024-07-15 23:28:42.611345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.405 [2024-07-15 23:28:42.611361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.405 [2024-07-15 23:28:42.611393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.405 qpair failed and we were unable to recover it. 00:25:27.405 [2024-07-15 23:28:42.621238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.405 [2024-07-15 23:28:42.621377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.405 [2024-07-15 23:28:42.621402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.405 [2024-07-15 23:28:42.621416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.405 [2024-07-15 23:28:42.621429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.405 [2024-07-15 23:28:42.621460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.405 qpair failed and we were unable to recover it. 00:25:27.405 [2024-07-15 23:28:42.631234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.405 [2024-07-15 23:28:42.631363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.405 [2024-07-15 23:28:42.631389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.405 [2024-07-15 23:28:42.631403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.405 [2024-07-15 23:28:42.631417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.405 [2024-07-15 23:28:42.631447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.405 qpair failed and we were unable to recover it. 
00:25:27.405 [2024-07-15 23:28:42.641259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.405 [2024-07-15 23:28:42.641392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.405 [2024-07-15 23:28:42.641418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.405 [2024-07-15 23:28:42.641439] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.405 [2024-07-15 23:28:42.641453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.405 [2024-07-15 23:28:42.641483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.405 qpair failed and we were unable to recover it. 00:25:27.405 [2024-07-15 23:28:42.651270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.405 [2024-07-15 23:28:42.651389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.405 [2024-07-15 23:28:42.651414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.405 [2024-07-15 23:28:42.651429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.405 [2024-07-15 23:28:42.651442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.405 [2024-07-15 23:28:42.651473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.405 qpair failed and we were unable to recover it. 00:25:27.405 [2024-07-15 23:28:42.661411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.405 [2024-07-15 23:28:42.661547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.405 [2024-07-15 23:28:42.661572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.405 [2024-07-15 23:28:42.661587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.405 [2024-07-15 23:28:42.661601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.405 [2024-07-15 23:28:42.661630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.405 qpair failed and we were unable to recover it. 
00:25:27.405 [2024-07-15 23:28:42.671332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.405 [2024-07-15 23:28:42.671468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.405 [2024-07-15 23:28:42.671493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.405 [2024-07-15 23:28:42.671507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.405 [2024-07-15 23:28:42.671521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.405 [2024-07-15 23:28:42.671551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.405 qpair failed and we were unable to recover it. 00:25:27.405 [2024-07-15 23:28:42.681402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.405 [2024-07-15 23:28:42.681528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.405 [2024-07-15 23:28:42.681554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.405 [2024-07-15 23:28:42.681568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.405 [2024-07-15 23:28:42.681582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.405 [2024-07-15 23:28:42.681612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.405 qpair failed and we were unable to recover it. 00:25:27.405 [2024-07-15 23:28:42.691389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.405 [2024-07-15 23:28:42.691536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.405 [2024-07-15 23:28:42.691562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.405 [2024-07-15 23:28:42.691576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.405 [2024-07-15 23:28:42.691589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.405 [2024-07-15 23:28:42.691620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.405 qpair failed and we were unable to recover it. 
00:25:27.405 [2024-07-15 23:28:42.701454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.405 [2024-07-15 23:28:42.701583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.405 [2024-07-15 23:28:42.701608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.405 [2024-07-15 23:28:42.701623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.406 [2024-07-15 23:28:42.701637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.406 [2024-07-15 23:28:42.701666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.406 qpair failed and we were unable to recover it. 00:25:27.406 [2024-07-15 23:28:42.711445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.406 [2024-07-15 23:28:42.711583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.406 [2024-07-15 23:28:42.711609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.406 [2024-07-15 23:28:42.711623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.406 [2024-07-15 23:28:42.711636] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.406 [2024-07-15 23:28:42.711668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.406 qpair failed and we were unable to recover it. 00:25:27.664 [2024-07-15 23:28:42.721502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.664 [2024-07-15 23:28:42.721627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.664 [2024-07-15 23:28:42.721652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.664 [2024-07-15 23:28:42.721667] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.664 [2024-07-15 23:28:42.721680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.664 [2024-07-15 23:28:42.721711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.664 qpair failed and we were unable to recover it. 
00:25:27.664 [2024-07-15 23:28:42.731530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.664 [2024-07-15 23:28:42.731649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.664 [2024-07-15 23:28:42.731679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.664 [2024-07-15 23:28:42.731695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.664 [2024-07-15 23:28:42.731709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.664 [2024-07-15 23:28:42.731749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.664 qpair failed and we were unable to recover it. 00:25:27.664 [2024-07-15 23:28:42.741574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.664 [2024-07-15 23:28:42.741714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.664 [2024-07-15 23:28:42.741747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.664 [2024-07-15 23:28:42.741765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.664 [2024-07-15 23:28:42.741779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.664 [2024-07-15 23:28:42.741819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.664 qpair failed and we were unable to recover it. 00:25:27.664 [2024-07-15 23:28:42.751599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.665 [2024-07-15 23:28:42.751719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.665 [2024-07-15 23:28:42.751755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.665 [2024-07-15 23:28:42.751772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.665 [2024-07-15 23:28:42.751785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.665 [2024-07-15 23:28:42.751818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.665 qpair failed and we were unable to recover it. 
00:25:27.665 [2024-07-15 23:28:42.761645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.665 [2024-07-15 23:28:42.761770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.665 [2024-07-15 23:28:42.761796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.665 [2024-07-15 23:28:42.761811] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.665 [2024-07-15 23:28:42.761824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.665 [2024-07-15 23:28:42.761854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.665 qpair failed and we were unable to recover it. 00:25:27.665 [2024-07-15 23:28:42.771638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.665 [2024-07-15 23:28:42.771772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.665 [2024-07-15 23:28:42.771798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.665 [2024-07-15 23:28:42.771813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.665 [2024-07-15 23:28:42.771826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.665 [2024-07-15 23:28:42.771862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.665 qpair failed and we were unable to recover it. 00:25:27.665 [2024-07-15 23:28:42.781794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.665 [2024-07-15 23:28:42.781907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.665 [2024-07-15 23:28:42.781933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.665 [2024-07-15 23:28:42.781948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.665 [2024-07-15 23:28:42.781962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.665 [2024-07-15 23:28:42.781992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.665 qpair failed and we were unable to recover it. 
00:25:27.665 [2024-07-15 23:28:42.791704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.665 [2024-07-15 23:28:42.791831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.665 [2024-07-15 23:28:42.791857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.665 [2024-07-15 23:28:42.791873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.665 [2024-07-15 23:28:42.791886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.665 [2024-07-15 23:28:42.791916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.665 qpair failed and we were unable to recover it. 00:25:27.665 [2024-07-15 23:28:42.801758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.665 [2024-07-15 23:28:42.801863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.665 [2024-07-15 23:28:42.801889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.665 [2024-07-15 23:28:42.801903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.665 [2024-07-15 23:28:42.801917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.665 [2024-07-15 23:28:42.801947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.665 qpair failed and we were unable to recover it. 00:25:27.665 [2024-07-15 23:28:42.811779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.665 [2024-07-15 23:28:42.811932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.665 [2024-07-15 23:28:42.811958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.665 [2024-07-15 23:28:42.811972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.665 [2024-07-15 23:28:42.811986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.665 [2024-07-15 23:28:42.812017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.665 qpair failed and we were unable to recover it. 
00:25:27.665 [2024-07-15 23:28:42.821820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.665 [2024-07-15 23:28:42.821946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.665 [2024-07-15 23:28:42.821977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.665 [2024-07-15 23:28:42.821992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.665 [2024-07-15 23:28:42.822006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.665 [2024-07-15 23:28:42.822036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.665 qpair failed and we were unable to recover it. 00:25:27.665 [2024-07-15 23:28:42.831822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.665 [2024-07-15 23:28:42.831937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.665 [2024-07-15 23:28:42.831962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.665 [2024-07-15 23:28:42.831977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.665 [2024-07-15 23:28:42.831991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.665 [2024-07-15 23:28:42.832020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.665 qpair failed and we were unable to recover it. 00:25:27.665 [2024-07-15 23:28:42.841889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.665 [2024-07-15 23:28:42.842003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.665 [2024-07-15 23:28:42.842029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.665 [2024-07-15 23:28:42.842043] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.665 [2024-07-15 23:28:42.842056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.665 [2024-07-15 23:28:42.842089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.665 qpair failed and we were unable to recover it. 
00:25:27.665 [2024-07-15 23:28:42.851909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.665 [2024-07-15 23:28:42.852011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.665 [2024-07-15 23:28:42.852037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.665 [2024-07-15 23:28:42.852052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.665 [2024-07-15 23:28:42.852065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.665 [2024-07-15 23:28:42.852095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.665 qpair failed and we were unable to recover it. 00:25:27.665 [2024-07-15 23:28:42.861924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.666 [2024-07-15 23:28:42.862052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.666 [2024-07-15 23:28:42.862077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.666 [2024-07-15 23:28:42.862092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.666 [2024-07-15 23:28:42.862111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.666 [2024-07-15 23:28:42.862142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.666 qpair failed and we were unable to recover it. 00:25:27.666 [2024-07-15 23:28:42.871944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.666 [2024-07-15 23:28:42.872051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.666 [2024-07-15 23:28:42.872077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.666 [2024-07-15 23:28:42.872092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.666 [2024-07-15 23:28:42.872106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a20000b90 00:25:27.666 [2024-07-15 23:28:42.872136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.666 qpair failed and we were unable to recover it. 
00:25:27.666 [2024-07-15 23:28:42.882054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.666 [2024-07-15 23:28:42.882180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.666 [2024-07-15 23:28:42.882211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.666 [2024-07-15 23:28:42.882227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.666 [2024-07-15 23:28:42.882241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a10000b90 00:25:27.666 [2024-07-15 23:28:42.882275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:27.666 qpair failed and we were unable to recover it. 00:25:27.666 [2024-07-15 23:28:42.892046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.666 [2024-07-15 23:28:42.892167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.666 [2024-07-15 23:28:42.892195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.666 [2024-07-15 23:28:42.892210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.666 [2024-07-15 23:28:42.892224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a10000b90 00:25:27.666 [2024-07-15 23:28:42.892254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:27.666 qpair failed and we were unable to recover it. 00:25:27.666 [2024-07-15 23:28:42.902098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.666 [2024-07-15 23:28:42.902244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.666 [2024-07-15 23:28:42.902270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.666 [2024-07-15 23:28:42.902285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.666 [2024-07-15 23:28:42.902299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a10000b90 00:25:27.666 [2024-07-15 23:28:42.902332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:27.666 qpair failed and we were unable to recover it. 
00:25:27.666 [2024-07-15 23:28:42.912108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.666 [2024-07-15 23:28:42.912232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.666 [2024-07-15 23:28:42.912259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.666 [2024-07-15 23:28:42.912274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.666 [2024-07-15 23:28:42.912288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a10000b90 00:25:27.666 [2024-07-15 23:28:42.912318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:27.666 qpair failed and we were unable to recover it. 00:25:27.666 [2024-07-15 23:28:42.922126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.666 [2024-07-15 23:28:42.922285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.666 [2024-07-15 23:28:42.922311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.666 [2024-07-15 23:28:42.922326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.666 [2024-07-15 23:28:42.922339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a10000b90 00:25:27.666 [2024-07-15 23:28:42.922368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:27.666 qpair failed and we were unable to recover it. 00:25:27.666 [2024-07-15 23:28:42.932158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.666 [2024-07-15 23:28:42.932278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.666 [2024-07-15 23:28:42.932305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.666 [2024-07-15 23:28:42.932319] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.666 [2024-07-15 23:28:42.932332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a10000b90 00:25:27.666 [2024-07-15 23:28:42.932363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:27.666 qpair failed and we were unable to recover it. 
00:25:27.666 [2024-07-15 23:28:42.942177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.666 [2024-07-15 23:28:42.942302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.666 [2024-07-15 23:28:42.942328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.666 [2024-07-15 23:28:42.942342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.666 [2024-07-15 23:28:42.942356] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a10000b90 00:25:27.666 [2024-07-15 23:28:42.942387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:27.666 qpair failed and we were unable to recover it. 00:25:27.666 [2024-07-15 23:28:42.952206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.666 [2024-07-15 23:28:42.952334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.666 [2024-07-15 23:28:42.952361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.666 [2024-07-15 23:28:42.952376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.666 [2024-07-15 23:28:42.952395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a10000b90 00:25:27.666 [2024-07-15 23:28:42.952428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:27.666 qpair failed and we were unable to recover it. 00:25:27.666 [2024-07-15 23:28:42.962239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.666 [2024-07-15 23:28:42.962405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.666 [2024-07-15 23:28:42.962432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.666 [2024-07-15 23:28:42.962446] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.666 [2024-07-15 23:28:42.962460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a10000b90 00:25:27.666 [2024-07-15 23:28:42.962492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:27.666 qpair failed and we were unable to recover it. 
00:25:27.666 [2024-07-15 23:28:42.972292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.666 [2024-07-15 23:28:42.972447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.667 [2024-07-15 23:28:42.972474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.667 [2024-07-15 23:28:42.972490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.667 [2024-07-15 23:28:42.972502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a10000b90 00:25:27.667 [2024-07-15 23:28:42.972532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:27.667 qpair failed and we were unable to recover it. 00:25:27.924 [2024-07-15 23:28:42.982314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.925 [2024-07-15 23:28:42.982441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.925 [2024-07-15 23:28:42.982468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.925 [2024-07-15 23:28:42.982483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.925 [2024-07-15 23:28:42.982497] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a10000b90 00:25:27.925 [2024-07-15 23:28:42.982530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:27.925 qpair failed and we were unable to recover it. 00:25:27.925 [2024-07-15 23:28:42.992335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.925 [2024-07-15 23:28:42.992457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.925 [2024-07-15 23:28:42.992484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.925 [2024-07-15 23:28:42.992499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.925 [2024-07-15 23:28:42.992513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a10000b90 00:25:27.925 [2024-07-15 23:28:42.992544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:27.925 qpair failed and we were unable to recover it. 
00:25:27.925 [2024-07-15 23:28:43.002399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.925 [2024-07-15 23:28:43.002525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.925 [2024-07-15 23:28:43.002552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.925 [2024-07-15 23:28:43.002567] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.925 [2024-07-15 23:28:43.002581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a10000b90 00:25:27.925 [2024-07-15 23:28:43.002611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:27.925 qpair failed and we were unable to recover it. 00:25:27.925 [2024-07-15 23:28:43.012424] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.925 [2024-07-15 23:28:43.012548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.925 [2024-07-15 23:28:43.012575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.925 [2024-07-15 23:28:43.012590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.925 [2024-07-15 23:28:43.012603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a10000b90 00:25:27.925 [2024-07-15 23:28:43.012634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:27.925 qpair failed and we were unable to recover it. 00:25:27.925 [2024-07-15 23:28:43.022436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.925 [2024-07-15 23:28:43.022562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.925 [2024-07-15 23:28:43.022586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.925 [2024-07-15 23:28:43.022601] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.925 [2024-07-15 23:28:43.022613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a10000b90 00:25:27.925 [2024-07-15 23:28:43.022656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:27.925 qpair failed and we were unable to recover it. 
00:25:27.925 [2024-07-15 23:28:43.032429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.925 [2024-07-15 23:28:43.032563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.925 [2024-07-15 23:28:43.032590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.925 [2024-07-15 23:28:43.032606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.925 [2024-07-15 23:28:43.032619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a10000b90 00:25:27.925 [2024-07-15 23:28:43.032659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:27.925 qpair failed and we were unable to recover it. 00:25:27.925 [2024-07-15 23:28:43.042432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.925 [2024-07-15 23:28:43.042554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.925 [2024-07-15 23:28:43.042581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.925 [2024-07-15 23:28:43.042602] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.925 [2024-07-15 23:28:43.042616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a10000b90 00:25:27.925 [2024-07-15 23:28:43.042647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:27.925 qpair failed and we were unable to recover it. 00:25:27.925 [2024-07-15 23:28:43.052497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.925 [2024-07-15 23:28:43.052619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.925 [2024-07-15 23:28:43.052645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.925 [2024-07-15 23:28:43.052660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.925 [2024-07-15 23:28:43.052673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a10000b90 00:25:27.925 [2024-07-15 23:28:43.052703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:27.925 qpair failed and we were unable to recover it. 
00:25:27.925 [2024-07-15 23:28:43.062613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.925 [2024-07-15 23:28:43.062734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.925 [2024-07-15 23:28:43.062766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.925 [2024-07-15 23:28:43.062781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.925 [2024-07-15 23:28:43.062795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a10000b90 00:25:27.925 [2024-07-15 23:28:43.062826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:27.925 qpair failed and we were unable to recover it. 00:25:27.925 [2024-07-15 23:28:43.072615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.925 [2024-07-15 23:28:43.072772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.925 [2024-07-15 23:28:43.072800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.925 [2024-07-15 23:28:43.072815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.925 [2024-07-15 23:28:43.072829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a10000b90 00:25:27.925 [2024-07-15 23:28:43.072870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:27.925 qpair failed and we were unable to recover it. 00:25:27.925 [2024-07-15 23:28:43.082638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.925 [2024-07-15 23:28:43.082792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.925 [2024-07-15 23:28:43.082819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.925 [2024-07-15 23:28:43.082834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.925 [2024-07-15 23:28:43.082847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a10000b90 00:25:27.925 [2024-07-15 23:28:43.082889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:27.925 qpair failed and we were unable to recover it. 
00:25:27.925 [2024-07-15 23:28:43.092626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.925 [2024-07-15 23:28:43.092796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.925 [2024-07-15 23:28:43.092823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.925 [2024-07-15 23:28:43.092838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.925 [2024-07-15 23:28:43.092851] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a10000b90 00:25:27.925 [2024-07-15 23:28:43.092881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:27.925 qpair failed and we were unable to recover it. 00:25:27.925 [2024-07-15 23:28:43.102637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.925 [2024-07-15 23:28:43.102771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.925 [2024-07-15 23:28:43.102797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.925 [2024-07-15 23:28:43.102812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.925 [2024-07-15 23:28:43.102825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a10000b90 00:25:27.925 [2024-07-15 23:28:43.102863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:27.925 qpair failed and we were unable to recover it. 00:25:27.925 [2024-07-15 23:28:43.112637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.925 [2024-07-15 23:28:43.112770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.925 [2024-07-15 23:28:43.112797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.925 [2024-07-15 23:28:43.112812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.925 [2024-07-15 23:28:43.112825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a10000b90 00:25:27.925 [2024-07-15 23:28:43.112855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:27.925 qpair failed and we were unable to recover it. 
00:25:27.925 [2024-07-15 23:28:43.122668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.925 [2024-07-15 23:28:43.122825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.925 [2024-07-15 23:28:43.122852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.926 [2024-07-15 23:28:43.122867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.926 [2024-07-15 23:28:43.122889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a10000b90 00:25:27.926 [2024-07-15 23:28:43.122920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:27.926 qpair failed and we were unable to recover it. 00:25:27.926 [2024-07-15 23:28:43.132755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.926 [2024-07-15 23:28:43.132860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.926 [2024-07-15 23:28:43.132891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.926 [2024-07-15 23:28:43.132907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.926 [2024-07-15 23:28:43.132921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a10000b90 00:25:27.926 [2024-07-15 23:28:43.132952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:27.926 qpair failed and we were unable to recover it. 00:25:27.926 [2024-07-15 23:28:43.142798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.926 [2024-07-15 23:28:43.142910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.926 [2024-07-15 23:28:43.142936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.926 [2024-07-15 23:28:43.142951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.926 [2024-07-15 23:28:43.142965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a10000b90 00:25:27.926 [2024-07-15 23:28:43.142994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:27.926 qpair failed and we were unable to recover it. 
00:25:27.926 [2024-07-15 23:28:43.152801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.926 [2024-07-15 23:28:43.152913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.926 [2024-07-15 23:28:43.152939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.926 [2024-07-15 23:28:43.152954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.926 [2024-07-15 23:28:43.152967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a10000b90 00:25:27.926 [2024-07-15 23:28:43.152997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:27.926 qpair failed and we were unable to recover it. 00:25:27.926 [2024-07-15 23:28:43.162838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.926 [2024-07-15 23:28:43.162963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.926 [2024-07-15 23:28:43.162989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.926 [2024-07-15 23:28:43.163004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.926 [2024-07-15 23:28:43.163018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a10000b90 00:25:27.926 [2024-07-15 23:28:43.163048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:27.926 qpair failed and we were unable to recover it. 00:25:27.926 [2024-07-15 23:28:43.172835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.926 [2024-07-15 23:28:43.172953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.926 [2024-07-15 23:28:43.172979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.926 [2024-07-15 23:28:43.172994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.926 [2024-07-15 23:28:43.173007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a10000b90 00:25:27.926 [2024-07-15 23:28:43.173044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:27.926 qpair failed and we were unable to recover it. 
00:25:27.926 [2024-07-15 23:28:43.182880] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.926 [2024-07-15 23:28:43.182992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.926 [2024-07-15 23:28:43.183018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.926 [2024-07-15 23:28:43.183033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.926 [2024-07-15 23:28:43.183047] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a10000b90 00:25:27.926 [2024-07-15 23:28:43.183076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:27.926 qpair failed and we were unable to recover it. 00:25:27.926 [2024-07-15 23:28:43.192882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:27.926 [2024-07-15 23:28:43.192984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:27.926 [2024-07-15 23:28:43.193010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:27.926 [2024-07-15 23:28:43.193025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:27.926 [2024-07-15 23:28:43.193037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7a10000b90 00:25:27.926 [2024-07-15 23:28:43.193070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:27.926 qpair failed and we were unable to recover it. 00:25:27.926 [2024-07-15 23:28:43.193193] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:25:27.926 A controller has encountered a failure and is being reset. 00:25:28.182 Controller properly reset. 00:25:28.182 Initializing NVMe Controllers 00:25:28.182 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:28.182 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:28.182 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:25:28.182 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:25:28.182 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:25:28.182 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:25:28.182 Initialization complete. Launching workers. 
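Note (not harness output): the "Attaching to NVMe over Fabrics controller at 10.0.0.2:4420" messages above are an NVMe/TCP fabrics connect after the controller reset. A rough, hedged equivalent from a Linux host using nvme-cli would look like the lines below; the address, port and subsystem NQN are taken from this log and are only reachable from inside the test's network namespaces, so treat this as a sketch rather than a reproduction step.
  # load the host-side NVMe/TCP transport and connect to the listener shown in the log
  modprobe nvme-tcp
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  # confirm the fabrics namespace is visible on the host
  nvme list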
00:25:28.182 Starting thread on core 1 00:25:28.182 Starting thread on core 2 00:25:28.182 Starting thread on core 3 00:25:28.182 Starting thread on core 0 00:25:28.182 23:28:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:25:28.182 00:25:28.182 real 0m10.836s 00:25:28.182 user 0m18.470s 00:25:28.182 sys 0m5.492s 00:25:28.182 23:28:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:28.182 23:28:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:28.182 ************************************ 00:25:28.182 END TEST nvmf_target_disconnect_tc2 00:25:28.182 ************************************ 00:25:28.182 23:28:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:25:28.182 23:28:43 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:25:28.182 23:28:43 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:25:28.182 23:28:43 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:25:28.182 23:28:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:28.182 23:28:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:25:28.182 23:28:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:28.182 23:28:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:25:28.182 23:28:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:28.182 23:28:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:28.182 rmmod nvme_tcp 00:25:28.182 rmmod nvme_fabrics 00:25:28.182 rmmod nvme_keyring 00:25:28.182 23:28:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:28.182 23:28:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:25:28.182 23:28:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:25:28.182 23:28:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2446719 ']' 00:25:28.182 23:28:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2446719 00:25:28.182 23:28:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 2446719 ']' 00:25:28.182 23:28:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 2446719 00:25:28.182 23:28:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:25:28.182 23:28:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:28.182 23:28:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2446719 00:25:28.182 23:28:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:25:28.182 23:28:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:25:28.182 23:28:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2446719' 00:25:28.182 killing process with pid 2446719 00:25:28.182 23:28:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 2446719 00:25:28.182 23:28:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 2446719 00:25:28.439 
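Note (not harness output): the rmmod/killprocess sequence above is the harness's own teardown via nvmftestfini. A hand-run equivalent is sketched below; the module list mirrors what the log unloads and the pid lookup is an assumption (the SPDK target binary is nvmf_tgt), so values will differ on other hosts.
  # drop any remaining fabrics sessions before unloading the host-side modules
  nvme disconnect-all
  # unload in dependency order, matching the modules removed in the log
  modprobe -r nvme_tcp nvme_fabrics nvme_keyring
  # stop the SPDK target process that was serving the test subsystem
  kill "$(pgrep -f nvmf_tgt)"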
23:28:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:28.439 23:28:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:28.439 23:28:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:28.439 23:28:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:28.439 23:28:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:28.439 23:28:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.439 23:28:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:28.439 23:28:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:30.970 23:28:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:30.970 00:25:30.970 real 0m15.587s 00:25:30.970 user 0m44.620s 00:25:30.970 sys 0m7.393s 00:25:30.970 23:28:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:30.970 23:28:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:30.970 ************************************ 00:25:30.970 END TEST nvmf_target_disconnect 00:25:30.970 ************************************ 00:25:30.970 23:28:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:30.970 23:28:45 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:25:30.970 23:28:45 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:30.970 23:28:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:30.970 23:28:45 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:25:30.970 00:25:30.970 real 19m43.155s 00:25:30.970 user 46m41.387s 00:25:30.970 sys 5m1.926s 00:25:30.970 23:28:45 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:30.970 23:28:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:30.970 ************************************ 00:25:30.970 END TEST nvmf_tcp 00:25:30.970 ************************************ 00:25:30.970 23:28:45 -- common/autotest_common.sh@1142 -- # return 0 00:25:30.970 23:28:45 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:25:30.970 23:28:45 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:30.970 23:28:45 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:30.970 23:28:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:30.970 23:28:45 -- common/autotest_common.sh@10 -- # set +x 00:25:30.970 ************************************ 00:25:30.970 START TEST spdkcli_nvmf_tcp 00:25:30.970 ************************************ 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:30.970 * Looking for test storage... 
00:25:30.970 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2447907 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2447907 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 2447907 ']' 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:30.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:30.970 23:28:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:30.970 [2024-07-15 23:28:45.960364] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:25:30.970 [2024-07-15 23:28:45.960441] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2447907 ] 00:25:30.970 EAL: No free 2048 kB hugepages reported on node 1 00:25:30.970 [2024-07-15 23:28:46.017927] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:30.970 [2024-07-15 23:28:46.132195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:30.970 [2024-07-15 23:28:46.132201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.970 23:28:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:30.970 23:28:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:25:30.970 23:28:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:25:30.970 23:28:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:30.970 23:28:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:30.970 23:28:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:25:30.970 23:28:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:25:30.970 23:28:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:25:30.970 23:28:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:30.970 23:28:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:30.970 23:28:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:25:30.970 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:25:30.970 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:25:30.970 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:25:30.970 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:25:30.970 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:25:30.970 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:25:30.970 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:30.970 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:25:30.970 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:25:30.970 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:30.970 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:30.970 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:25:30.970 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:30.970 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:30.971 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:25:30.971 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:30.971 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:30.971 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:30.971 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:30.971 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:25:30.971 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:25:30.971 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:30.971 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:25:30.971 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:30.971 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:25:30.971 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:25:30.971 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:25:30.971 ' 00:25:34.256 [2024-07-15 23:28:48.832495] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:34.823 [2024-07-15 23:28:50.052928] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:25:37.346 [2024-07-15 23:28:52.312055] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:25:39.276 [2024-07-15 23:28:54.254203] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:25:40.645 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:40.645 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:40.645 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:40.645 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:40.645 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:40.645 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:40.645 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:40.645 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:40.645 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:40.645 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:40.645 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:40.645 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:40.645 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:40.645 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:40.646 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:40.646 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:40.646 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:40.646 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:40.646 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:40.646 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:40.646 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:40.646 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:40.646 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:40.646 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:25:40.646 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:40.646 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:40.646 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:25:40.646 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:40.646 23:28:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:40.646 23:28:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:40.646 23:28:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:40.646 23:28:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:40.646 23:28:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:40.646 23:28:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:40.646 23:28:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:25:40.646 23:28:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:25:41.211 23:28:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:41.211 23:28:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:41.211 23:28:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:41.211 23:28:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:41.211 23:28:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:41.211 23:28:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:41.211 23:28:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:41.211 23:28:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:41.211 23:28:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:41.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:41.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:41.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:41.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:25:41.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:25:41.211 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:41.211 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:41.211 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:41.211 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:41.211 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:41.211 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:41.211 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:41.211 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:41.211 ' 00:25:46.472 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:46.472 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:46.472 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:46.472 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:46.472 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:25:46.472 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:25:46.472 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:46.472 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:46.472 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:25:46.472 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:46.472 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:25:46.472 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:46.472 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:46.472 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:46.472 23:29:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:46.472 23:29:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:46.472 23:29:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:46.472 23:29:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2447907 00:25:46.472 23:29:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2447907 ']' 00:25:46.472 23:29:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2447907 00:25:46.472 23:29:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:25:46.472 23:29:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:46.472 23:29:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2447907 00:25:46.472 23:29:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:46.472 23:29:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:46.472 23:29:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2447907' 00:25:46.472 killing process with pid 2447907 00:25:46.472 23:29:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 2447907 00:25:46.472 23:29:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 2447907 00:25:46.731 23:29:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:25:46.731 23:29:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:25:46.731 23:29:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2447907 ']' 00:25:46.731 23:29:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2447907 00:25:46.731 23:29:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2447907 ']' 00:25:46.731 23:29:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2447907 00:25:46.731 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2447907) - No such process 00:25:46.731 23:29:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 2447907 is not found' 00:25:46.731 Process with pid 2447907 is not found 00:25:46.731 23:29:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:46.731 23:29:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:46.731 23:29:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:46.731 00:25:46.731 real 0m16.094s 00:25:46.731 user 0m33.929s 00:25:46.731 sys 0m0.834s 00:25:46.731 23:29:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:46.731 23:29:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:46.731 ************************************ 00:25:46.731 END TEST spdkcli_nvmf_tcp 00:25:46.731 ************************************ 00:25:46.731 23:29:01 -- common/autotest_common.sh@1142 -- # return 0 00:25:46.731 23:29:01 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:46.731 23:29:01 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:46.731 23:29:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:46.731 23:29:01 -- common/autotest_common.sh@10 -- # set +x 00:25:46.731 ************************************ 00:25:46.731 START TEST nvmf_identify_passthru 00:25:46.731 ************************************ 00:25:46.731 23:29:01 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:46.731 * Looking for test storage... 00:25:46.731 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:46.731 23:29:02 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:46.731 23:29:02 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:25:46.731 23:29:02 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:46.731 23:29:02 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:46.731 23:29:02 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:46.731 23:29:02 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:46.731 23:29:02 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:46.731 23:29:02 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:46.731 23:29:02 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:46.731 23:29:02 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:46.731 23:29:02 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:46.731 23:29:02 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:46.731 23:29:02 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:46.731 23:29:02 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:25:46.731 23:29:02 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:46.731 23:29:02 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:46.731 23:29:02 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:46.731 23:29:02 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:46.731 23:29:02 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:46.990 23:29:02 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:46.990 23:29:02 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:46.990 23:29:02 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:46.990 23:29:02 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.990 23:29:02 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.990 23:29:02 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.990 23:29:02 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:25:46.990 23:29:02 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.990 23:29:02 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:25:46.990 23:29:02 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:46.990 23:29:02 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:46.990 23:29:02 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:46.990 23:29:02 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:46.990 23:29:02 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:46.990 23:29:02 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:46.990 23:29:02 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:46.990 23:29:02 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:46.990 23:29:02 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:46.990 23:29:02 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:46.990 23:29:02 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:46.990 23:29:02 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:46.990 23:29:02 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.990 23:29:02 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.990 23:29:02 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.990 23:29:02 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:25:46.990 23:29:02 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.990 23:29:02 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:25:46.990 23:29:02 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:46.990 23:29:02 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:46.990 23:29:02 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:46.990 23:29:02 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:46.990 23:29:02 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:46.990 23:29:02 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:46.990 23:29:02 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:46.990 23:29:02 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:46.990 23:29:02 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:46.990 23:29:02 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:46.990 23:29:02 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:25:46.990 23:29:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:48.936 23:29:04 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:48.936 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:48.936 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:48.936 Found net devices under 0000:84:00.0: cvl_0_0 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:48.936 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:48.937 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:48.937 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:48.937 Found net devices under 0000:84:00.1: cvl_0_1 00:25:48.937 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:48.937 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:48.937 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:25:48.937 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:48.937 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
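The device scan above reduces to a sysfs walk: each supported E810 function advertises its bound kernel interface under /sys/bus/pci/devices/<bdf>/net/. A minimal sketch of that mapping, using the BDFs the log reports rather than the harness's cached PCI lookup:

for pci in 0000:84:00.0 0000:84:00.1; do                   # E810 ports found above
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        [[ -e "$netdir" ]] || continue                      # skip ports with no netdev bound
        echo "Found net devices under $pci: ${netdir##*/}"  # cvl_0_0 / cvl_0_1 here
    done
done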
00:25:48.937 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:48.937 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:48.937 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:48.937 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:48.937 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:48.937 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:48.937 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:48.937 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:48.937 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:48.937 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:48.937 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:48.937 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:48.937 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:48.937 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:48.937 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:48.937 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:48.937 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:48.937 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:48.937 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:48.937 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:48.937 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:48.937 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:48.937 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:25:48.937 00:25:48.937 --- 10.0.0.2 ping statistics --- 00:25:48.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.937 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:25:48.937 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:48.937 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:48.937 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:25:48.937 00:25:48.937 --- 10.0.0.1 ping statistics --- 00:25:48.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.937 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:25:48.937 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:48.937 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:25:48.937 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:48.937 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:48.937 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:48.937 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:48.937 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:48.937 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:48.937 23:29:04 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:48.937 23:29:04 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:25:48.937 23:29:04 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:48.937 23:29:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:48.937 23:29:04 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:25:48.937 23:29:04 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:25:48.937 23:29:04 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:25:48.937 23:29:04 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:25:48.937 23:29:04 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:25:48.937 23:29:04 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:25:48.937 23:29:04 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:25:48.937 23:29:04 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:48.937 23:29:04 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:48.937 23:29:04 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:25:49.196 23:29:04 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:25:49.196 23:29:04 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:82:00.0 00:25:49.196 23:29:04 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:82:00.0 00:25:49.196 23:29:04 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:82:00.0 00:25:49.196 23:29:04 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:82:00.0 ']' 00:25:49.196 23:29:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:25:49.196 23:29:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:25:49.196 23:29:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:25:49.196 EAL: No free 2048 kB hugepages reported on node 1 00:25:53.376 
23:29:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ9142051K1P0FGN 00:25:53.376 23:29:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:25:53.376 23:29:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:25:53.376 23:29:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:25:53.376 EAL: No free 2048 kB hugepages reported on node 1 00:25:57.557 23:29:12 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:25:57.557 23:29:12 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:25:57.557 23:29:12 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:57.557 23:29:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:57.557 23:29:12 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:25:57.557 23:29:12 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:57.557 23:29:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:57.557 23:29:12 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2452564 00:25:57.557 23:29:12 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:57.558 23:29:12 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:57.558 23:29:12 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2452564 00:25:57.558 23:29:12 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 2452564 ']' 00:25:57.558 23:29:12 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:57.558 23:29:12 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:57.558 23:29:12 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:57.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:57.558 23:29:12 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:57.558 23:29:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:57.558 [2024-07-15 23:29:12.796192] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:25:57.558 [2024-07-15 23:29:12.796271] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:57.558 EAL: No free 2048 kB hugepages reported on node 1 00:25:57.558 [2024-07-15 23:29:12.860521] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:57.815 [2024-07-15 23:29:12.968229] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:57.815 [2024-07-15 23:29:12.968295] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
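The target above is launched with --wait-for-rpc so identify passthru can be enabled before the framework comes up; the JSON-RPC requests printed next correspond to roughly this sequence (a sketch with paths shortened to the SPDK tree, assuming rpc_cmd forwards to scripts/rpc.py on /var/tmp/spdk.sock):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!
# after the harness's waitforlisten sees /var/tmp/spdk.sock:
./scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr   # sets admin_cmd_passthru.identify_ctrlr
./scripts/rpc.py framework_start_init                        # releases the --wait-for-rpc hold
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192     # TCP transport, 8 KiB IO unit size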
00:25:57.815 [2024-07-15 23:29:12.968319] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:57.815 [2024-07-15 23:29:12.968330] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:57.815 [2024-07-15 23:29:12.968339] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:57.815 [2024-07-15 23:29:12.968422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:57.815 [2024-07-15 23:29:12.968484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:57.815 [2024-07-15 23:29:12.968551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:57.815 [2024-07-15 23:29:12.968553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:57.815 23:29:13 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:57.815 23:29:13 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:25:57.815 23:29:13 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:25:57.816 23:29:13 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.816 23:29:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:57.816 INFO: Log level set to 20 00:25:57.816 INFO: Requests: 00:25:57.816 { 00:25:57.816 "jsonrpc": "2.0", 00:25:57.816 "method": "nvmf_set_config", 00:25:57.816 "id": 1, 00:25:57.816 "params": { 00:25:57.816 "admin_cmd_passthru": { 00:25:57.816 "identify_ctrlr": true 00:25:57.816 } 00:25:57.816 } 00:25:57.816 } 00:25:57.816 00:25:57.816 INFO: response: 00:25:57.816 { 00:25:57.816 "jsonrpc": "2.0", 00:25:57.816 "id": 1, 00:25:57.816 "result": true 00:25:57.816 } 00:25:57.816 00:25:57.816 23:29:13 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.816 23:29:13 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:25:57.816 23:29:13 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.816 23:29:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:57.816 INFO: Setting log level to 20 00:25:57.816 INFO: Setting log level to 20 00:25:57.816 INFO: Log level set to 20 00:25:57.816 INFO: Log level set to 20 00:25:57.816 INFO: Requests: 00:25:57.816 { 00:25:57.816 "jsonrpc": "2.0", 00:25:57.816 "method": "framework_start_init", 00:25:57.816 "id": 1 00:25:57.816 } 00:25:57.816 00:25:57.816 INFO: Requests: 00:25:57.816 { 00:25:57.816 "jsonrpc": "2.0", 00:25:57.816 "method": "framework_start_init", 00:25:57.816 "id": 1 00:25:57.816 } 00:25:57.816 00:25:57.816 [2024-07-15 23:29:13.103923] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:25:57.816 INFO: response: 00:25:57.816 { 00:25:57.816 "jsonrpc": "2.0", 00:25:57.816 "id": 1, 00:25:57.816 "result": true 00:25:57.816 } 00:25:57.816 00:25:57.816 INFO: response: 00:25:57.816 { 00:25:57.816 "jsonrpc": "2.0", 00:25:57.816 "id": 1, 00:25:57.816 "result": true 00:25:57.816 } 00:25:57.816 00:25:57.816 23:29:13 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.816 23:29:13 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:57.816 23:29:13 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.816 23:29:13 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:25:57.816 INFO: Setting log level to 40 00:25:57.816 INFO: Setting log level to 40 00:25:57.816 INFO: Setting log level to 40 00:25:57.816 [2024-07-15 23:29:13.113875] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:57.816 23:29:13 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.816 23:29:13 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:25:57.816 23:29:13 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:57.816 23:29:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:58.073 23:29:13 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:82:00.0 00:25:58.073 23:29:13 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.073 23:29:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:01.345 Nvme0n1 00:26:01.345 23:29:15 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.345 23:29:15 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:26:01.345 23:29:15 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.345 23:29:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:01.345 23:29:15 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.345 23:29:15 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:01.345 23:29:15 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.345 23:29:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:01.345 23:29:15 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.346 23:29:15 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:01.346 23:29:15 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.346 23:29:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:01.346 [2024-07-15 23:29:16.003225] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:01.346 23:29:16 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.346 23:29:16 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:26:01.346 23:29:16 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.346 23:29:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:01.346 [ 00:26:01.346 { 00:26:01.346 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:01.346 "subtype": "Discovery", 00:26:01.346 "listen_addresses": [], 00:26:01.346 "allow_any_host": true, 00:26:01.346 "hosts": [] 00:26:01.346 }, 00:26:01.346 { 00:26:01.346 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:01.346 "subtype": "NVMe", 00:26:01.346 "listen_addresses": [ 00:26:01.346 { 00:26:01.346 "trtype": "TCP", 00:26:01.346 "adrfam": "IPv4", 00:26:01.346 "traddr": "10.0.0.2", 00:26:01.346 "trsvcid": "4420" 00:26:01.346 } 00:26:01.346 ], 00:26:01.346 "allow_any_host": true, 00:26:01.346 "hosts": [], 00:26:01.346 "serial_number": 
"SPDK00000000000001", 00:26:01.346 "model_number": "SPDK bdev Controller", 00:26:01.346 "max_namespaces": 1, 00:26:01.346 "min_cntlid": 1, 00:26:01.346 "max_cntlid": 65519, 00:26:01.346 "namespaces": [ 00:26:01.346 { 00:26:01.346 "nsid": 1, 00:26:01.346 "bdev_name": "Nvme0n1", 00:26:01.346 "name": "Nvme0n1", 00:26:01.346 "nguid": "B4C04CA18CDE404D9A89CDE1D5078988", 00:26:01.346 "uuid": "b4c04ca1-8cde-404d-9a89-cde1d5078988" 00:26:01.346 } 00:26:01.346 ] 00:26:01.346 } 00:26:01.346 ] 00:26:01.346 23:29:16 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.346 23:29:16 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:01.346 23:29:16 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:26:01.346 23:29:16 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:26:01.346 EAL: No free 2048 kB hugepages reported on node 1 00:26:01.346 23:29:16 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ9142051K1P0FGN 00:26:01.346 23:29:16 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:01.346 23:29:16 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:26:01.346 23:29:16 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:26:01.346 EAL: No free 2048 kB hugepages reported on node 1 00:26:01.346 23:29:16 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:26:01.346 23:29:16 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ9142051K1P0FGN '!=' BTLJ9142051K1P0FGN ']' 00:26:01.346 23:29:16 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:26:01.346 23:29:16 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:01.346 23:29:16 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.346 23:29:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:01.346 23:29:16 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.346 23:29:16 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:26:01.346 23:29:16 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:26:01.346 23:29:16 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:01.346 23:29:16 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:26:01.346 23:29:16 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:01.346 23:29:16 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:26:01.346 23:29:16 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:01.346 23:29:16 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:01.346 rmmod nvme_tcp 00:26:01.346 rmmod nvme_fabrics 00:26:01.346 rmmod nvme_keyring 00:26:01.346 23:29:16 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:01.346 23:29:16 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:26:01.346 23:29:16 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:26:01.346 23:29:16 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 2452564 ']' 00:26:01.346 23:29:16 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 2452564 00:26:01.346 23:29:16 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 2452564 ']' 00:26:01.346 23:29:16 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 2452564 00:26:01.346 23:29:16 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:26:01.346 23:29:16 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:01.346 23:29:16 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2452564 00:26:01.346 23:29:16 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:01.346 23:29:16 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:01.346 23:29:16 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2452564' 00:26:01.346 killing process with pid 2452564 00:26:01.346 23:29:16 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 2452564 00:26:01.346 23:29:16 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 2452564 00:26:03.239 23:29:18 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:03.239 23:29:18 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:03.239 23:29:18 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:03.239 23:29:18 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:03.239 23:29:18 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:03.239 23:29:18 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:03.239 23:29:18 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:03.239 23:29:18 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:05.136 23:29:20 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:05.137 00:26:05.137 real 0m18.219s 00:26:05.137 user 0m27.075s 00:26:05.137 sys 0m2.385s 00:26:05.137 23:29:20 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:05.137 23:29:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:05.137 ************************************ 00:26:05.137 END TEST nvmf_identify_passthru 00:26:05.137 ************************************ 00:26:05.137 23:29:20 -- common/autotest_common.sh@1142 -- # return 0 00:26:05.137 23:29:20 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:26:05.137 23:29:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:05.137 23:29:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:05.137 23:29:20 -- common/autotest_common.sh@10 -- # set +x 00:26:05.137 ************************************ 00:26:05.137 START TEST nvmf_dif 00:26:05.137 ************************************ 00:26:05.137 23:29:20 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:26:05.137 * Looking for test storage... 
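The check that just finished amounts to identifying the same controller twice, once over PCIe and once through the TCP subsystem, and requiring identical serial and model numbers; condensed, with paths shortened to the SPDK tree:

bdf=0000:82:00.0
nvme_serial=$(./build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 \
    | grep 'Serial Number:' | awk '{print $3}')
nvmf_serial=$(./build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    | grep 'Serial Number:' | awk '{print $3}')
[ "$nvme_serial" != "$nvmf_serial" ] && exit 1   # passthru must surface the physical serial (BTLJ9142051K1P0FGN)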
00:26:05.137 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:05.137 23:29:20 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:05.137 23:29:20 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:26:05.137 23:29:20 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:05.137 23:29:20 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:05.137 23:29:20 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:05.137 23:29:20 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:05.137 23:29:20 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:05.137 23:29:20 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:05.137 23:29:20 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:05.137 23:29:20 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:05.137 23:29:20 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:05.137 23:29:20 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:05.137 23:29:20 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:05.137 23:29:20 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:26:05.137 23:29:20 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:05.137 23:29:20 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:05.137 23:29:20 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:05.137 23:29:20 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:05.137 23:29:20 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:05.137 23:29:20 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:05.137 23:29:20 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:05.137 23:29:20 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:05.137 23:29:20 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.137 23:29:20 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.137 23:29:20 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.137 23:29:20 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:26:05.137 23:29:20 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.137 23:29:20 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:26:05.137 23:29:20 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:05.137 23:29:20 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:05.137 23:29:20 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:05.137 23:29:20 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:05.137 23:29:20 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:05.137 23:29:20 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:05.137 23:29:20 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:05.137 23:29:20 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:05.137 23:29:20 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:26:05.137 23:29:20 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:26:05.137 23:29:20 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:26:05.137 23:29:20 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:26:05.137 23:29:20 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:26:05.137 23:29:20 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:05.137 23:29:20 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:05.137 23:29:20 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:05.137 23:29:20 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:05.137 23:29:20 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:05.137 23:29:20 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:05.137 23:29:20 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:05.137 23:29:20 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:05.137 23:29:20 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:05.137 23:29:20 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:05.137 23:29:20 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:26:05.137 23:29:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:07.035 23:29:22 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:07.035 23:29:22 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:26:07.035 23:29:22 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:07.035 23:29:22 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:07.035 23:29:22 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:07.035 23:29:22 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:07.035 23:29:22 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:07.035 23:29:22 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:26:07.035 23:29:22 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:07.035 23:29:22 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:26:07.035 23:29:22 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:26:07.035 23:29:22 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:26:07.035 23:29:22 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:26:07.035 23:29:22 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:26:07.035 23:29:22 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:26:07.035 23:29:22 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:07.035 23:29:22 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:07.035 23:29:22 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:07.035 23:29:22 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:07.035 23:29:22 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:07.036 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:07.036 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
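The trace at this point is NIC discovery: gather_supported_nvmf_pci_devs fills lookup tables with the Intel E810 (0x8086:0x1592, 0x8086:0x159b), X722 (0x8086:0x37d2) and Mellanox (0x15b3:...) device IDs, then walks the PCI bus and reports each match, e.g. "Found 0000:84:00.0 (0x8086 - 0x159b)". A minimal sketch of that classification step, assuming the usual sysfs layout; this is illustrative only, the real logic lives in the nvmf/common.sh sourced above and also records the bound drivers and net devices:

# Sketch only: classify NICs by PCI vendor/device ID, the way the discovery step above does.
intel=0x8086 mellanox=0x15b3
for dev in /sys/bus/pci/devices/*; do
    vendor=$(cat "$dev/vendor"); device=$(cat "$dev/device")
    case "$vendor:$device" in
        "$intel:0x1592" | "$intel:0x159b") echo "Found ${dev##*/} ($vendor - $device)  # e810" ;;
        "$intel:0x37d2")                   echo "Found ${dev##*/} ($vendor - $device)  # x722" ;;
        "$mellanox:"*)                     echo "Found ${dev##*/} ($vendor - $device)  # mlx"  ;;
    esac
done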
00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:07.036 Found net devices under 0000:84:00.0: cvl_0_0 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:07.036 Found net devices under 0000:84:00.1: cvl_0_1 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:07.036 23:29:22 
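Because both E810 ports sit in the same host, nvmf_tcp_init isolates the target-side port in its own network namespace so that the initiator (10.0.0.1 on cvl_0_1) and the target (10.0.0.2 on cvl_0_0) talk over a real TCP path. Condensed, the setup traced above is the following; interface names and addressing are the ones from this run:

# Condensed from the nvmf_tcp_init trace above: namespace, addresses, links, firewall.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP (port 4420) through

The two pings that follow (to 10.0.0.2 from the root namespace and to 10.0.0.1 from inside the namespace) verify the path in both directions before any NVMe traffic is attempted.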
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:07.036 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:07.036 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:26:07.036 00:26:07.036 --- 10.0.0.2 ping statistics --- 00:26:07.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:07.036 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:26:07.036 23:29:22 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:07.296 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:07.296 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:26:07.296 00:26:07.296 --- 10.0.0.1 ping statistics --- 00:26:07.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:07.296 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:26:07.296 23:29:22 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:07.296 23:29:22 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:26:07.296 23:29:22 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:26:07.296 23:29:22 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:08.228 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:26:08.228 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:08.228 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:26:08.228 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:26:08.228 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:26:08.228 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:26:08.228 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:26:08.228 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:26:08.228 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:26:08.228 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:26:08.228 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:26:08.228 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:26:08.228 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:26:08.228 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:26:08.228 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:26:08.228 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:26:08.228 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:26:08.485 23:29:23 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:08.485 23:29:23 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:08.485 23:29:23 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:08.485 23:29:23 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:08.485 23:29:23 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:08.485 23:29:23 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:08.485 23:29:23 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:26:08.485 23:29:23 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:26:08.485 23:29:23 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:08.485 23:29:23 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:08.485 23:29:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:08.485 23:29:23 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=2455779 00:26:08.485 23:29:23 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:26:08.485 23:29:23 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 2455779 00:26:08.485 23:29:23 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 2455779 ']' 00:26:08.485 23:29:23 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:08.485 23:29:23 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:08.485 23:29:23 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:08.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:08.485 23:29:23 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:08.485 23:29:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:08.485 [2024-07-15 23:29:23.647509] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:26:08.485 [2024-07-15 23:29:23.647585] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:08.485 EAL: No free 2048 kB hugepages reported on node 1 00:26:08.485 [2024-07-15 23:29:23.715910] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.742 [2024-07-15 23:29:23.832966] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:08.742 [2024-07-15 23:29:23.833027] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:08.742 [2024-07-15 23:29:23.833053] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:08.742 [2024-07-15 23:29:23.833066] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:08.742 [2024-07-15 23:29:23.833078] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
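nvmfappstart launches the SPDK target inside that namespace and waits for its RPC socket; the DPDK/EAL initialization banner and the "Reactor started on core 0" notice come from this process. A minimal sketch, with a plain polling loop standing in for the waitforlisten helper used in the trace (the real helper is more careful and also verifies the process is still alive):

# Sketch: start the NVMe-oF target in the test namespace and wait for its RPC socket.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
nvmfpid=$!
until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done    # simplified stand-in for waitforlisten
echo "nvmf_tgt (pid $nvmfpid) is up; RPCs go to /var/tmp/spdk.sock"

Here -i 0 sets the shared-memory instance id (hence /dev/shm/nvmf_trace.0 in the notices) and -e 0xFFFF enables the tracepoint group mask reported above.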
00:26:08.742 [2024-07-15 23:29:23.833115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:09.308 23:29:24 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:09.308 23:29:24 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:26:09.308 23:29:24 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:09.308 23:29:24 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:09.308 23:29:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:09.308 23:29:24 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:09.308 23:29:24 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:26:09.308 23:29:24 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:26:09.308 23:29:24 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.308 23:29:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:09.308 [2024-07-15 23:29:24.606556] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:09.308 23:29:24 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.308 23:29:24 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:26:09.308 23:29:24 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:09.308 23:29:24 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:09.308 23:29:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:09.566 ************************************ 00:26:09.566 START TEST fio_dif_1_default 00:26:09.566 ************************************ 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:09.566 bdev_null0 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:09.566 [2024-07-15 23:29:24.662864] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:09.566 { 00:26:09.566 "params": { 00:26:09.566 "name": "Nvme$subsystem", 00:26:09.566 "trtype": "$TEST_TRANSPORT", 00:26:09.566 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:09.566 "adrfam": "ipv4", 00:26:09.566 "trsvcid": "$NVMF_PORT", 00:26:09.566 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:09.566 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:09.566 "hdgst": ${hdgst:-false}, 00:26:09.566 "ddgst": ${ddgst:-false} 00:26:09.566 }, 00:26:09.566 "method": "bdev_nvme_attach_controller" 00:26:09.566 } 00:26:09.566 EOF 00:26:09.566 )") 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default 
-- target/dif.sh@72 -- # (( file <= files )) 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:09.566 "params": { 00:26:09.566 "name": "Nvme0", 00:26:09.566 "trtype": "tcp", 00:26:09.566 "traddr": "10.0.0.2", 00:26:09.566 "adrfam": "ipv4", 00:26:09.566 "trsvcid": "4420", 00:26:09.566 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:09.566 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:09.566 "hdgst": false, 00:26:09.566 "ddgst": false 00:26:09.566 }, 00:26:09.566 "method": "bdev_nvme_attach_controller" 00:26:09.566 }' 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:09.566 23:29:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:09.567 23:29:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:09.567 23:29:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:09.824 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:09.824 fio-3.35 00:26:09.824 Starting 1 thread 00:26:09.824 EAL: No free 2048 kB hugepages reported on node 1 00:26:22.012 00:26:22.012 filename0: (groupid=0, jobs=1): err= 0: pid=2456082: Mon Jul 15 23:29:35 2024 00:26:22.012 read: IOPS=189, BW=758KiB/s (776kB/s)(7600KiB/10029msec) 00:26:22.012 slat (nsec): min=4997, max=45606, avg=9664.81, stdev=4483.18 00:26:22.012 clat (usec): min=601, max=44472, avg=21082.93, stdev=20273.42 00:26:22.012 lat (usec): min=609, max=44488, avg=21092.59, stdev=20272.96 00:26:22.012 clat percentiles (usec): 00:26:22.012 | 1.00th=[ 627], 5.00th=[ 668], 10.00th=[ 685], 20.00th=[ 725], 00:26:22.012 | 30.00th=[ 783], 40.00th=[ 865], 50.00th=[41157], 60.00th=[41157], 00:26:22.012 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:26:22.012 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:26:22.012 | 99.99th=[44303] 00:26:22.012 bw ( KiB/s): min= 704, max= 768, per=100.00%, avg=758.40, stdev=23.45, samples=20 00:26:22.012 iops : min= 176, max= 192, avg=189.60, stdev= 5.86, samples=20 00:26:22.012 
lat (usec) : 750=23.89%, 1000=25.26% 00:26:22.012 lat (msec) : 2=0.74%, 50=50.11% 00:26:22.012 cpu : usr=89.04%, sys=10.70%, ctx=14, majf=0, minf=224 00:26:22.012 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:22.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.012 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.012 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:22.012 00:26:22.012 Run status group 0 (all jobs): 00:26:22.012 READ: bw=758KiB/s (776kB/s), 758KiB/s-758KiB/s (776kB/s-776kB/s), io=7600KiB (7782kB), run=10029-10029msec 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.012 00:26:22.012 real 0m11.190s 00:26:22.012 user 0m10.140s 00:26:22.012 sys 0m1.331s 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:22.012 ************************************ 00:26:22.012 END TEST fio_dif_1_default 00:26:22.012 ************************************ 00:26:22.012 23:29:35 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:22.012 23:29:35 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:26:22.012 23:29:35 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:22.012 23:29:35 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:22.012 23:29:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:22.012 ************************************ 00:26:22.012 START TEST fio_dif_1_multi_subsystems 00:26:22.012 ************************************ 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in 
"$@" 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:22.012 bdev_null0 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:22.012 [2024-07-15 23:29:35.900970] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:22.012 bdev_null1 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:22.012 23:29:35 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:22.012 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:22.013 { 00:26:22.013 "params": { 00:26:22.013 "name": "Nvme$subsystem", 00:26:22.013 "trtype": "$TEST_TRANSPORT", 00:26:22.013 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:22.013 "adrfam": "ipv4", 00:26:22.013 "trsvcid": "$NVMF_PORT", 00:26:22.013 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:22.013 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:22.013 "hdgst": ${hdgst:-false}, 00:26:22.013 "ddgst": ${ddgst:-false} 00:26:22.013 }, 00:26:22.013 "method": "bdev_nvme_attach_controller" 00:26:22.013 } 00:26:22.013 EOF 00:26:22.013 )") 00:26:22.013 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:22.013 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:26:22.013 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:22.013 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:26:22.013 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:22.013 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:26:22.013 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:22.013 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:22.013 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:26:22.013 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:22.013 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:22.013 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:26:22.013 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:22.013 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:26:22.013 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:22.013 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:26:22.013 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:26:22.013 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:22.013 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:22.013 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:22.013 { 00:26:22.013 "params": { 00:26:22.013 "name": "Nvme$subsystem", 00:26:22.013 "trtype": "$TEST_TRANSPORT", 00:26:22.013 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:22.013 "adrfam": "ipv4", 00:26:22.013 "trsvcid": "$NVMF_PORT", 00:26:22.013 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:22.013 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:22.013 "hdgst": ${hdgst:-false}, 00:26:22.013 "ddgst": ${ddgst:-false} 00:26:22.013 }, 00:26:22.013 "method": "bdev_nvme_attach_controller" 00:26:22.013 } 00:26:22.013 EOF 00:26:22.013 )") 00:26:22.013 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:26:22.013 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:26:22.013 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:22.013 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
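gen_nvmf_target_json has now emitted one bdev_nvme_attach_controller parameter block per subsystem ("Nvme0" and "Nvme1" for this two-subsystem test); jq merges them into the single JSON document printed next, which fio reads on /dev/fd/62 while the generated job file arrives on /dev/fd/61. The same pattern written with ordinary process substitution, assuming the shell functions from target/dif.sh and nvmf/common.sh are available (standalone, real files would take their place):

# Sketch: hand the SPDK bdev JSON config and the fio job file to fio over /dev/fd,
# as the fio_bdev/fio_plugin helpers in the trace do. LD_PRELOAD loads the SPDK
# fio plugin, which provides the spdk_bdev ioengine.
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev \
        --spdk_json_conf <(gen_nvmf_target_json 0 1) \
        <(gen_fio_conf)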
00:26:22.013 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:26:22.013 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:22.013 "params": { 00:26:22.013 "name": "Nvme0", 00:26:22.013 "trtype": "tcp", 00:26:22.013 "traddr": "10.0.0.2", 00:26:22.013 "adrfam": "ipv4", 00:26:22.013 "trsvcid": "4420", 00:26:22.013 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:22.013 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:22.013 "hdgst": false, 00:26:22.013 "ddgst": false 00:26:22.013 }, 00:26:22.013 "method": "bdev_nvme_attach_controller" 00:26:22.013 },{ 00:26:22.013 "params": { 00:26:22.013 "name": "Nvme1", 00:26:22.013 "trtype": "tcp", 00:26:22.013 "traddr": "10.0.0.2", 00:26:22.013 "adrfam": "ipv4", 00:26:22.013 "trsvcid": "4420", 00:26:22.013 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:22.013 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:22.013 "hdgst": false, 00:26:22.013 "ddgst": false 00:26:22.013 }, 00:26:22.013 "method": "bdev_nvme_attach_controller" 00:26:22.013 }' 00:26:22.013 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:22.013 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:22.013 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:22.013 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:22.013 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:22.013 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:22.013 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:22.013 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:22.013 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:22.013 23:29:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:22.013 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:22.013 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:22.013 fio-3.35 00:26:22.013 Starting 2 threads 00:26:22.013 EAL: No free 2048 kB hugepages reported on node 1 00:26:31.994 00:26:31.994 filename0: (groupid=0, jobs=1): err= 0: pid=2457493: Mon Jul 15 23:29:47 2024 00:26:31.994 read: IOPS=186, BW=747KiB/s (765kB/s)(7504KiB/10042msec) 00:26:31.994 slat (nsec): min=7233, max=45058, avg=10015.47, stdev=3117.07 00:26:31.994 clat (usec): min=599, max=44676, avg=21380.57, stdev=20597.39 00:26:31.994 lat (usec): min=607, max=44721, avg=21390.58, stdev=20597.15 00:26:31.994 clat percentiles (usec): 00:26:31.994 | 1.00th=[ 635], 5.00th=[ 652], 10.00th=[ 668], 20.00th=[ 709], 00:26:31.994 | 30.00th=[ 734], 40.00th=[ 799], 50.00th=[41157], 60.00th=[41157], 00:26:31.994 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:26:31.994 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:26:31.994 | 99.99th=[44827] 
00:26:31.994 bw ( KiB/s): min= 704, max= 768, per=50.16%, avg=748.80, stdev=28.24, samples=20 00:26:31.994 iops : min= 176, max= 192, avg=187.20, stdev= 7.06, samples=20 00:26:31.994 lat (usec) : 750=32.78%, 1000=16.63% 00:26:31.994 lat (msec) : 2=0.48%, 50=50.11% 00:26:31.994 cpu : usr=93.95%, sys=5.71%, ctx=23, majf=0, minf=139 00:26:31.994 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:31.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:31.994 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:31.994 issued rwts: total=1876,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:31.994 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:31.994 filename1: (groupid=0, jobs=1): err= 0: pid=2457494: Mon Jul 15 23:29:47 2024 00:26:31.994 read: IOPS=186, BW=745KiB/s (763kB/s)(7472KiB/10030msec) 00:26:31.994 slat (nsec): min=6856, max=44994, avg=10076.62, stdev=3231.68 00:26:31.994 clat (usec): min=589, max=45648, avg=21446.53, stdev=20549.78 00:26:31.994 lat (usec): min=597, max=45693, avg=21456.61, stdev=20549.42 00:26:31.994 clat percentiles (usec): 00:26:31.994 | 1.00th=[ 635], 5.00th=[ 660], 10.00th=[ 668], 20.00th=[ 709], 00:26:31.994 | 30.00th=[ 734], 40.00th=[ 906], 50.00th=[41157], 60.00th=[41157], 00:26:31.994 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:26:31.994 | 99.00th=[42206], 99.50th=[42730], 99.90th=[45876], 99.95th=[45876], 00:26:31.994 | 99.99th=[45876] 00:26:31.994 bw ( KiB/s): min= 704, max= 768, per=49.96%, avg=745.60, stdev=29.55, samples=20 00:26:31.994 iops : min= 176, max= 192, avg=186.40, stdev= 7.39, samples=20 00:26:31.994 lat (usec) : 750=31.75%, 1000=11.35% 00:26:31.994 lat (msec) : 2=6.58%, 50=50.32% 00:26:31.994 cpu : usr=94.15%, sys=5.56%, ctx=11, majf=0, minf=107 00:26:31.994 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:31.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:31.994 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:31.994 issued rwts: total=1868,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:31.994 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:31.994 00:26:31.994 Run status group 0 (all jobs): 00:26:31.994 READ: bw=1491KiB/s (1527kB/s), 745KiB/s-747KiB/s (763kB/s-765kB/s), io=14.6MiB (15.3MB), run=10030-10042msec 00:26:32.256 23:29:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:26:32.256 23:29:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:26:32.256 23:29:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:26:32.256 23:29:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:32.256 23:29:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:26:32.256 23:29:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:32.256 23:29:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.256 23:29:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:32.256 23:29:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.256 23:29:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:32.256 23:29:47 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.256 23:29:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:32.256 23:29:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.256 23:29:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:26:32.256 23:29:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:32.256 23:29:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:26:32.256 23:29:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:32.256 23:29:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.256 23:29:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:32.256 23:29:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.256 23:29:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:32.256 23:29:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.256 23:29:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:32.256 23:29:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.256 00:26:32.256 real 0m11.635s 00:26:32.256 user 0m20.422s 00:26:32.256 sys 0m1.456s 00:26:32.256 23:29:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:32.256 23:29:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:32.256 ************************************ 00:26:32.256 END TEST fio_dif_1_multi_subsystems 00:26:32.256 ************************************ 00:26:32.256 23:29:47 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:32.256 23:29:47 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:26:32.256 23:29:47 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:32.256 23:29:47 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:32.256 23:29:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:32.256 ************************************ 00:26:32.256 START TEST fio_dif_rand_params 00:26:32.256 ************************************ 00:26:32.256 23:29:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:26:32.256 23:29:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:26:32.256 23:29:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:26:32.256 23:29:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:26:32.256 23:29:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:26:32.256 23:29:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:26:32.256 23:29:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:26:32.256 23:29:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:26:32.256 23:29:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:26:32.256 23:29:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:32.256 23:29:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub 
in "$@" 00:26:32.256 23:29:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:32.256 23:29:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:32.256 23:29:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:32.256 23:29:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.256 23:29:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:32.256 bdev_null0 00:26:32.256 23:29:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.256 23:29:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:32.256 23:29:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.256 23:29:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:32.256 23:29:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.256 23:29:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:32.256 23:29:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.256 23:29:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:32.518 23:29:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.518 23:29:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:32.518 23:29:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.518 23:29:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:32.518 [2024-07-15 23:29:47.580546] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:32.518 23:29:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.518 23:29:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:26:32.518 23:29:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:26:32.518 23:29:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:32.518 23:29:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:32.518 23:29:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:32.518 23:29:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:32.518 23:29:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:32.518 23:29:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:32.518 { 00:26:32.518 "params": { 00:26:32.518 "name": "Nvme$subsystem", 00:26:32.518 "trtype": "$TEST_TRANSPORT", 00:26:32.518 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:32.518 "adrfam": "ipv4", 00:26:32.518 "trsvcid": "$NVMF_PORT", 00:26:32.518 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:32.518 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:32.518 "hdgst": ${hdgst:-false}, 00:26:32.518 "ddgst": ${ddgst:-false} 00:26:32.518 }, 00:26:32.518 "method": "bdev_nvme_attach_controller" 00:26:32.518 } 
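Each create_subsystem call traced here reduces to a short RPC sequence against the running target: create a 64 MB null bdev with 512-byte blocks, 16 bytes of metadata and the DIF type selected above (NULL_DIF=3 for this case), create the NVMe-oF subsystem, attach the bdev as a namespace, and add the TCP listener on 10.0.0.2:4420. The same sequence issued directly with scripts/rpc.py, assuming the default /var/tmp/spdk.sock that rpc_cmd targets in this run:

# Sketch: the per-subsystem RPC sequence from the trace, issued with scripts/rpc.py.
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420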
00:26:32.518 EOF 00:26:32.518 )") 00:26:32.518 23:29:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:32.518 23:29:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:32.518 23:29:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:32.519 23:29:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:32.519 23:29:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:32.519 23:29:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:32.519 23:29:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:32.519 23:29:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:32.519 23:29:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:32.519 23:29:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:32.519 23:29:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:32.519 23:29:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:32.519 23:29:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:32.519 23:29:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:32.519 23:29:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:32.519 23:29:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:32.519 23:29:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:32.519 23:29:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:26:32.519 23:29:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:32.519 23:29:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:32.519 "params": { 00:26:32.519 "name": "Nvme0", 00:26:32.519 "trtype": "tcp", 00:26:32.519 "traddr": "10.0.0.2", 00:26:32.519 "adrfam": "ipv4", 00:26:32.519 "trsvcid": "4420", 00:26:32.519 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:32.519 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:32.519 "hdgst": false, 00:26:32.519 "ddgst": false 00:26:32.519 }, 00:26:32.519 "method": "bdev_nvme_attach_controller" 00:26:32.519 }' 00:26:32.519 23:29:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:32.519 23:29:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:32.519 23:29:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:32.519 23:29:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:32.519 23:29:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:32.519 23:29:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:32.519 23:29:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:32.519 23:29:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:32.519 23:29:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:32.519 23:29:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:32.776 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:32.776 ... 
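For reference, the job fio runs next (rw=randread, bs=128k, iodepth=3, numjobs=3, runtime=5, per the parameters set at the top of fio_dif_rand_params) could be written as a plain job file roughly like the one below. This is a hypothetical reconstruction: the exact option set gen_fio_conf emits may differ, and Nvme0n1 is the bdev name produced by the "Nvme0" bdev_nvme_attach_controller entry in the JSON config:

# Hypothetical equivalent of the generated job file for this run (sketch only).
cat > /tmp/dif_rand_params.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5
time_based=1

[filename0]
filename=Nvme0n1
EOF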
00:26:32.776 fio-3.35 00:26:32.776 Starting 3 threads 00:26:32.776 EAL: No free 2048 kB hugepages reported on node 1 00:26:39.322 00:26:39.322 filename0: (groupid=0, jobs=1): err= 0: pid=2458893: Mon Jul 15 23:29:53 2024 00:26:39.322 read: IOPS=202, BW=25.3MiB/s (26.5MB/s)(128MiB/5047msec) 00:26:39.322 slat (nsec): min=4111, max=38286, avg=13408.28, stdev=3470.09 00:26:39.322 clat (usec): min=4666, max=54598, avg=14755.81, stdev=12290.02 00:26:39.322 lat (usec): min=4678, max=54610, avg=14769.22, stdev=12290.10 00:26:39.322 clat percentiles (usec): 00:26:39.322 | 1.00th=[ 5211], 5.00th=[ 5997], 10.00th=[ 7635], 20.00th=[ 8586], 00:26:39.322 | 30.00th=[ 9241], 40.00th=[10159], 50.00th=[11338], 60.00th=[12125], 00:26:39.322 | 70.00th=[12911], 80.00th=[14222], 90.00th=[18220], 95.00th=[51119], 00:26:39.322 | 99.00th=[53216], 99.50th=[53740], 99.90th=[54789], 99.95th=[54789], 00:26:39.322 | 99.99th=[54789] 00:26:39.322 bw ( KiB/s): min=19456, max=31232, per=31.74%, avg=26092.20, stdev=3572.69, samples=10 00:26:39.322 iops : min= 152, max= 244, avg=203.80, stdev=27.86, samples=10 00:26:39.322 lat (msec) : 10=38.26%, 20=51.86%, 50=3.03%, 100=6.85% 00:26:39.322 cpu : usr=90.17%, sys=9.37%, ctx=12, majf=0, minf=151 00:26:39.322 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:39.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.322 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.322 issued rwts: total=1022,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:39.322 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:39.322 filename0: (groupid=0, jobs=1): err= 0: pid=2458894: Mon Jul 15 23:29:53 2024 00:26:39.322 read: IOPS=231, BW=28.9MiB/s (30.3MB/s)(145MiB/5007msec) 00:26:39.322 slat (nsec): min=4328, max=42105, avg=12892.68, stdev=3562.65 00:26:39.322 clat (usec): min=4573, max=56611, avg=12951.50, stdev=10808.98 00:26:39.322 lat (usec): min=4586, max=56624, avg=12964.39, stdev=10809.11 00:26:39.322 clat percentiles (usec): 00:26:39.322 | 1.00th=[ 5080], 5.00th=[ 5866], 10.00th=[ 6456], 20.00th=[ 7963], 00:26:39.322 | 30.00th=[ 8586], 40.00th=[ 9241], 50.00th=[10290], 60.00th=[11076], 00:26:39.322 | 70.00th=[11863], 80.00th=[12780], 90.00th=[15270], 95.00th=[49546], 00:26:39.322 | 99.00th=[52691], 99.50th=[53740], 99.90th=[54789], 99.95th=[56361], 00:26:39.322 | 99.99th=[56361] 00:26:39.322 bw ( KiB/s): min=22528, max=35328, per=35.97%, avg=29568.00, stdev=4703.79, samples=10 00:26:39.322 iops : min= 176, max= 276, avg=231.00, stdev=36.75, samples=10 00:26:39.322 lat (msec) : 10=47.58%, 20=44.91%, 50=3.37%, 100=4.15% 00:26:39.322 cpu : usr=90.45%, sys=9.11%, ctx=8, majf=0, minf=113 00:26:39.322 IO depths : 1=1.5%, 2=98.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:39.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.322 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.322 issued rwts: total=1158,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:39.323 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:39.323 filename0: (groupid=0, jobs=1): err= 0: pid=2458895: Mon Jul 15 23:29:53 2024 00:26:39.323 read: IOPS=211, BW=26.5MiB/s (27.8MB/s)(133MiB/5005msec) 00:26:39.323 slat (nsec): min=4695, max=61652, avg=13379.34, stdev=3596.82 00:26:39.323 clat (usec): min=4567, max=91853, avg=14130.51, stdev=12240.03 00:26:39.323 lat (usec): min=4579, max=91867, avg=14143.89, stdev=12240.05 00:26:39.323 clat percentiles (usec): 
00:26:39.323 | 1.00th=[ 4948], 5.00th=[ 5800], 10.00th=[ 6521], 20.00th=[ 8356], 00:26:39.323 | 30.00th=[ 8979], 40.00th=[ 9896], 50.00th=[11076], 60.00th=[11731], 00:26:39.323 | 70.00th=[12518], 80.00th=[13566], 90.00th=[16319], 95.00th=[50594], 00:26:39.323 | 99.00th=[54264], 99.50th=[54789], 99.90th=[89654], 99.95th=[91751], 00:26:39.323 | 99.99th=[91751] 00:26:39.323 bw ( KiB/s): min=18432, max=38912, per=32.95%, avg=27084.80, stdev=6271.62, samples=10 00:26:39.323 iops : min= 144, max= 304, avg=211.60, stdev=49.00, samples=10 00:26:39.323 lat (msec) : 10=41.38%, 20=49.76%, 50=2.92%, 100=5.94% 00:26:39.323 cpu : usr=91.11%, sys=8.41%, ctx=11, majf=0, minf=127 00:26:39.323 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:39.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.323 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.323 issued rwts: total=1061,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:39.323 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:39.323 00:26:39.323 Run status group 0 (all jobs): 00:26:39.323 READ: bw=80.3MiB/s (84.2MB/s), 25.3MiB/s-28.9MiB/s (26.5MB/s-30.3MB/s), io=405MiB (425MB), run=5005-5047msec 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
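The teardown/setup exchange traced here is plain JSON-RPC against the running nvmf target: subsystem 0 from the previous case is removed (nvmf_delete_subsystem, then bdev_null_delete), and the next case recreates three null bdevs with DIF type 2 metadata and exposes each through its own subsystem and TCP listener, as the rpc_cmd trace below shows. A condensed sketch of the equivalent calls for one subsystem via rpc.py follows; the rpc.py path and the default RPC socket are assumptions, while the arguments mirror the trace.

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumed checkout location
RPC="$SPDK_DIR/scripts/rpc.py"   # rpc_cmd in the trace wraps this; assumes the default /var/tmp/spdk.sock

# Tear down the previous test's subsystem before deleting its backing bdev.
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$RPC bdev_null_delete bdev_null0

# Recreate the namespace backing store: a 64 MB null bdev with 512-byte blocks,
# 16 bytes of metadata per block, protection information DIF type 2.
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2

# Expose it over NVMe/TCP on 10.0.0.2:4420, open to any host NQN.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The same create sequence repeats in the trace for cnode1/bdev_null1 and cnode2/bdev_null2 before the 24-thread fio run starts.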
00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:39.323 bdev_null0 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:39.323 [2024-07-15 23:29:53.734936] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:39.323 bdev_null1 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:39.323 bdev_null2 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:39.323 23:29:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:39.324 23:29:53 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:39.324 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:39.324 23:29:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:39.324 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:39.324 23:29:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:39.324 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:39.324 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:39.324 23:29:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:39.324 { 00:26:39.324 "params": { 00:26:39.324 "name": "Nvme$subsystem", 00:26:39.324 "trtype": "$TEST_TRANSPORT", 00:26:39.324 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:39.324 "adrfam": "ipv4", 00:26:39.324 "trsvcid": "$NVMF_PORT", 00:26:39.324 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:39.324 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:39.324 "hdgst": ${hdgst:-false}, 00:26:39.324 "ddgst": ${ddgst:-false} 00:26:39.324 }, 00:26:39.324 "method": "bdev_nvme_attach_controller" 00:26:39.324 } 00:26:39.324 EOF 00:26:39.324 )") 00:26:39.324 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:39.324 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:39.324 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:39.324 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:39.324 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:39.324 23:29:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:39.324 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:39.324 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:39.324 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:39.324 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:39.324 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:39.324 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:39.324 23:29:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:39.324 23:29:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:39.324 { 00:26:39.324 "params": { 00:26:39.324 "name": "Nvme$subsystem", 00:26:39.324 "trtype": "$TEST_TRANSPORT", 00:26:39.324 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:39.324 "adrfam": "ipv4", 00:26:39.324 "trsvcid": "$NVMF_PORT", 00:26:39.324 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:39.324 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:39.324 "hdgst": ${hdgst:-false}, 00:26:39.324 "ddgst": ${ddgst:-false} 00:26:39.324 }, 00:26:39.324 "method": "bdev_nvme_attach_controller" 00:26:39.324 } 00:26:39.324 EOF 00:26:39.324 )") 00:26:39.324 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:39.324 23:29:53 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:26:39.324 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:39.324 23:29:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:39.324 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:39.324 23:29:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:39.324 23:29:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:39.324 23:29:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:39.324 { 00:26:39.324 "params": { 00:26:39.324 "name": "Nvme$subsystem", 00:26:39.324 "trtype": "$TEST_TRANSPORT", 00:26:39.324 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:39.324 "adrfam": "ipv4", 00:26:39.324 "trsvcid": "$NVMF_PORT", 00:26:39.324 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:39.324 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:39.324 "hdgst": ${hdgst:-false}, 00:26:39.324 "ddgst": ${ddgst:-false} 00:26:39.324 }, 00:26:39.324 "method": "bdev_nvme_attach_controller" 00:26:39.324 } 00:26:39.324 EOF 00:26:39.324 )") 00:26:39.324 23:29:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:39.324 23:29:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:26:39.324 23:29:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:39.324 23:29:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:39.324 "params": { 00:26:39.324 "name": "Nvme0", 00:26:39.324 "trtype": "tcp", 00:26:39.324 "traddr": "10.0.0.2", 00:26:39.324 "adrfam": "ipv4", 00:26:39.324 "trsvcid": "4420", 00:26:39.324 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:39.324 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:39.324 "hdgst": false, 00:26:39.324 "ddgst": false 00:26:39.324 }, 00:26:39.324 "method": "bdev_nvme_attach_controller" 00:26:39.324 },{ 00:26:39.324 "params": { 00:26:39.324 "name": "Nvme1", 00:26:39.324 "trtype": "tcp", 00:26:39.324 "traddr": "10.0.0.2", 00:26:39.324 "adrfam": "ipv4", 00:26:39.324 "trsvcid": "4420", 00:26:39.324 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:39.324 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:39.324 "hdgst": false, 00:26:39.324 "ddgst": false 00:26:39.324 }, 00:26:39.324 "method": "bdev_nvme_attach_controller" 00:26:39.324 },{ 00:26:39.324 "params": { 00:26:39.324 "name": "Nvme2", 00:26:39.324 "trtype": "tcp", 00:26:39.324 "traddr": "10.0.0.2", 00:26:39.324 "adrfam": "ipv4", 00:26:39.324 "trsvcid": "4420", 00:26:39.324 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:39.324 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:39.324 "hdgst": false, 00:26:39.324 "ddgst": false 00:26:39.324 }, 00:26:39.324 "method": "bdev_nvme_attach_controller" 00:26:39.324 }' 00:26:39.324 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:39.324 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:39.324 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:39.324 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:39.324 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:39.324 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:39.324 23:29:53 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1345 -- # asan_lib= 00:26:39.324 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:39.324 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:39.324 23:29:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:39.324 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:39.324 ... 00:26:39.324 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:39.324 ... 00:26:39.324 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:39.324 ... 00:26:39.324 fio-3.35 00:26:39.324 Starting 24 threads 00:26:39.324 EAL: No free 2048 kB hugepages reported on node 1 00:26:51.518 00:26:51.518 filename0: (groupid=0, jobs=1): err= 0: pid=2459759: Mon Jul 15 23:30:05 2024 00:26:51.518 read: IOPS=479, BW=1919KiB/s (1965kB/s)(18.8MiB/10008msec) 00:26:51.518 slat (usec): min=8, max=119, avg=30.27, stdev=21.70 00:26:51.518 clat (usec): min=8849, max=69400, avg=33169.91, stdev=3148.72 00:26:51.518 lat (usec): min=8860, max=69430, avg=33200.19, stdev=3149.12 00:26:51.518 clat percentiles (usec): 00:26:51.518 | 1.00th=[22676], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:26:51.518 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:26:51.518 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34866], 00:26:51.518 | 99.00th=[41681], 99.50th=[49021], 99.90th=[69731], 99.95th=[69731], 00:26:51.518 | 99.99th=[69731] 00:26:51.518 bw ( KiB/s): min= 1664, max= 2000, per=4.16%, avg=1911.58, stdev=66.05, samples=19 00:26:51.518 iops : min= 416, max= 500, avg=477.89, stdev=16.51, samples=19 00:26:51.518 lat (msec) : 10=0.12%, 20=0.50%, 50=99.04%, 100=0.33% 00:26:51.518 cpu : usr=95.74%, sys=2.65%, ctx=86, majf=0, minf=28 00:26:51.518 IO depths : 1=0.3%, 2=3.6%, 4=13.3%, 8=67.8%, 16=15.0%, 32=0.0%, >=64=0.0% 00:26:51.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.518 complete : 0=0.0%, 4=92.0%, 8=5.0%, 16=3.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.518 issued rwts: total=4802,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.518 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:51.518 filename0: (groupid=0, jobs=1): err= 0: pid=2459760: Mon Jul 15 23:30:05 2024 00:26:51.518 read: IOPS=479, BW=1920KiB/s (1966kB/s)(18.8MiB/10002msec) 00:26:51.518 slat (usec): min=7, max=119, avg=32.88, stdev=26.65 00:26:51.518 clat (usec): min=8061, max=49216, avg=33053.69, stdev=1688.24 00:26:51.518 lat (usec): min=8106, max=49274, avg=33086.58, stdev=1682.19 00:26:51.518 clat percentiles (usec): 00:26:51.518 | 1.00th=[31065], 5.00th=[31851], 10.00th=[32113], 20.00th=[32637], 00:26:51.518 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:26:51.518 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:26:51.518 | 99.00th=[36439], 99.50th=[36963], 99.90th=[44827], 99.95th=[45351], 00:26:51.518 | 99.99th=[49021] 00:26:51.518 bw ( KiB/s): min= 1792, max= 2048, per=4.18%, avg=1920.00, stdev=42.67, samples=19 00:26:51.518 iops : min= 448, max= 512, avg=480.00, stdev=10.67, samples=19 00:26:51.518 lat (msec) : 10=0.04%, 20=0.29%, 50=99.67% 
00:26:51.518 cpu : usr=96.85%, sys=2.11%, ctx=61, majf=0, minf=26 00:26:51.518 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:51.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.518 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.518 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.518 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:51.518 filename0: (groupid=0, jobs=1): err= 0: pid=2459761: Mon Jul 15 23:30:05 2024 00:26:51.518 read: IOPS=478, BW=1913KiB/s (1959kB/s)(18.7MiB/10032msec) 00:26:51.518 slat (usec): min=6, max=116, avg=38.16, stdev=23.97 00:26:51.518 clat (usec): min=27718, max=55381, avg=33101.84, stdev=1505.42 00:26:51.518 lat (usec): min=27764, max=55400, avg=33140.00, stdev=1501.62 00:26:51.518 clat percentiles (usec): 00:26:51.518 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:26:51.518 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:26:51.518 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:26:51.518 | 99.00th=[36439], 99.50th=[36963], 99.90th=[55313], 99.95th=[55313], 00:26:51.518 | 99.99th=[55313] 00:26:51.518 bw ( KiB/s): min= 1664, max= 2048, per=4.17%, avg=1913.60, stdev=65.33, samples=20 00:26:51.518 iops : min= 416, max= 512, avg=478.40, stdev=16.33, samples=20 00:26:51.518 lat (msec) : 50=99.67%, 100=0.33% 00:26:51.518 cpu : usr=97.31%, sys=2.06%, ctx=110, majf=0, minf=24 00:26:51.518 IO depths : 1=6.3%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:51.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.518 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.518 issued rwts: total=4798,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.518 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:51.518 filename0: (groupid=0, jobs=1): err= 0: pid=2459762: Mon Jul 15 23:30:05 2024 00:26:51.518 read: IOPS=477, BW=1911KiB/s (1957kB/s)(18.7MiB/10011msec) 00:26:51.518 slat (usec): min=7, max=128, avg=46.10, stdev=22.74 00:26:51.518 clat (usec): min=21561, max=63017, avg=33063.10, stdev=2077.71 00:26:51.518 lat (usec): min=21572, max=63065, avg=33109.20, stdev=2074.66 00:26:51.518 clat percentiles (usec): 00:26:51.518 | 1.00th=[31327], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:26:51.518 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:26:51.518 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:26:51.519 | 99.00th=[36439], 99.50th=[43254], 99.90th=[62653], 99.95th=[63177], 00:26:51.519 | 99.99th=[63177] 00:26:51.519 bw ( KiB/s): min= 1664, max= 2048, per=4.15%, avg=1907.20, stdev=82.01, samples=20 00:26:51.519 iops : min= 416, max= 512, avg=476.80, stdev=20.50, samples=20 00:26:51.519 lat (msec) : 50=99.67%, 100=0.33% 00:26:51.519 cpu : usr=97.12%, sys=1.81%, ctx=196, majf=0, minf=15 00:26:51.519 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:26:51.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.519 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.519 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.519 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:51.519 filename0: (groupid=0, jobs=1): err= 0: pid=2459763: Mon Jul 15 23:30:05 2024 00:26:51.519 read: IOPS=477, BW=1911KiB/s 
(1957kB/s)(18.7MiB/10011msec) 00:26:51.519 slat (usec): min=9, max=120, avg=47.52, stdev=26.58 00:26:51.519 clat (usec): min=31161, max=62883, avg=33055.71, stdev=1875.16 00:26:51.519 lat (usec): min=31227, max=62912, avg=33103.22, stdev=1871.51 00:26:51.519 clat percentiles (usec): 00:26:51.519 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:26:51.519 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:26:51.519 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:26:51.519 | 99.00th=[36439], 99.50th=[36963], 99.90th=[62653], 99.95th=[62653], 00:26:51.519 | 99.99th=[62653] 00:26:51.519 bw ( KiB/s): min= 1664, max= 2048, per=4.15%, avg=1907.20, stdev=82.01, samples=20 00:26:51.519 iops : min= 416, max= 512, avg=476.80, stdev=20.50, samples=20 00:26:51.519 lat (msec) : 50=99.67%, 100=0.33% 00:26:51.519 cpu : usr=96.40%, sys=2.21%, ctx=175, majf=0, minf=25 00:26:51.519 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:51.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.519 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.519 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.519 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:51.519 filename0: (groupid=0, jobs=1): err= 0: pid=2459764: Mon Jul 15 23:30:05 2024 00:26:51.519 read: IOPS=479, BW=1918KiB/s (1965kB/s)(18.8MiB/10008msec) 00:26:51.519 slat (usec): min=8, max=126, avg=43.33, stdev=26.89 00:26:51.519 clat (usec): min=8365, max=54949, avg=32956.75, stdev=2260.77 00:26:51.519 lat (usec): min=8374, max=54983, avg=33000.08, stdev=2258.48 00:26:51.519 clat percentiles (usec): 00:26:51.519 | 1.00th=[30802], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:26:51.519 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:26:51.519 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:26:51.519 | 99.00th=[37487], 99.50th=[49021], 99.90th=[54789], 99.95th=[54789], 00:26:51.519 | 99.99th=[54789] 00:26:51.519 bw ( KiB/s): min= 1667, max= 2048, per=4.17%, avg=1913.42, stdev=66.49, samples=19 00:26:51.519 iops : min= 416, max= 512, avg=478.32, stdev=16.78, samples=19 00:26:51.519 lat (msec) : 10=0.15%, 20=0.19%, 50=99.29%, 100=0.38% 00:26:51.519 cpu : usr=96.02%, sys=2.46%, ctx=252, majf=0, minf=22 00:26:51.519 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:26:51.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.519 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.519 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.519 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:51.519 filename0: (groupid=0, jobs=1): err= 0: pid=2459765: Mon Jul 15 23:30:05 2024 00:26:51.519 read: IOPS=478, BW=1913KiB/s (1959kB/s)(18.7MiB/10018msec) 00:26:51.519 slat (usec): min=8, max=116, avg=41.59, stdev=21.19 00:26:51.519 clat (usec): min=17486, max=52112, avg=33057.62, stdev=1464.02 00:26:51.519 lat (usec): min=17516, max=52144, avg=33099.21, stdev=1460.15 00:26:51.519 clat percentiles (usec): 00:26:51.519 | 1.00th=[31327], 5.00th=[31851], 10.00th=[32113], 20.00th=[32637], 00:26:51.519 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:26:51.519 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:26:51.519 | 99.00th=[35914], 99.50th=[36439], 99.90th=[52167], 
99.95th=[52167], 00:26:51.519 | 99.99th=[52167] 00:26:51.519 bw ( KiB/s): min= 1664, max= 2048, per=4.17%, avg=1913.60, stdev=65.33, samples=20 00:26:51.519 iops : min= 416, max= 512, avg=478.40, stdev=16.33, samples=20 00:26:51.519 lat (msec) : 20=0.15%, 50=99.52%, 100=0.33% 00:26:51.519 cpu : usr=97.98%, sys=1.60%, ctx=39, majf=0, minf=27 00:26:51.519 IO depths : 1=6.3%, 2=12.5%, 4=25.0%, 8=49.9%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:51.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.519 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.519 issued rwts: total=4791,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.519 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:51.519 filename0: (groupid=0, jobs=1): err= 0: pid=2459766: Mon Jul 15 23:30:05 2024 00:26:51.519 read: IOPS=477, BW=1912KiB/s (1958kB/s)(18.7MiB/10010msec) 00:26:51.519 slat (usec): min=7, max=118, avg=33.58, stdev=17.99 00:26:51.519 clat (usec): min=20934, max=67718, avg=33165.72, stdev=2223.34 00:26:51.519 lat (usec): min=20951, max=67736, avg=33199.30, stdev=2222.07 00:26:51.519 clat percentiles (usec): 00:26:51.519 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:26:51.519 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:26:51.519 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:26:51.519 | 99.00th=[36963], 99.50th=[38011], 99.90th=[67634], 99.95th=[67634], 00:26:51.519 | 99.99th=[67634] 00:26:51.519 bw ( KiB/s): min= 1664, max= 2048, per=4.15%, avg=1906.53, stdev=84.20, samples=19 00:26:51.519 iops : min= 416, max= 512, avg=476.63, stdev=21.05, samples=19 00:26:51.519 lat (msec) : 50=99.67%, 100=0.33% 00:26:51.519 cpu : usr=97.45%, sys=1.71%, ctx=148, majf=0, minf=18 00:26:51.519 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:51.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.519 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.519 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.519 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:51.519 filename1: (groupid=0, jobs=1): err= 0: pid=2459767: Mon Jul 15 23:30:05 2024 00:26:51.519 read: IOPS=479, BW=1919KiB/s (1965kB/s)(18.8MiB/10009msec) 00:26:51.519 slat (usec): min=8, max=115, avg=31.06, stdev=13.03 00:26:51.519 clat (usec): min=11063, max=71260, avg=33046.30, stdev=2993.65 00:26:51.519 lat (usec): min=11072, max=71287, avg=33077.36, stdev=2994.22 00:26:51.519 clat percentiles (usec): 00:26:51.519 | 1.00th=[23462], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:26:51.519 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:26:51.519 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34866], 00:26:51.519 | 99.00th=[37487], 99.50th=[45876], 99.90th=[70779], 99.95th=[70779], 00:26:51.519 | 99.99th=[70779] 00:26:51.519 bw ( KiB/s): min= 1664, max= 2048, per=4.15%, avg=1906.53, stdev=72.59, samples=19 00:26:51.519 iops : min= 416, max= 512, avg=476.63, stdev=18.15, samples=19 00:26:51.519 lat (msec) : 20=0.42%, 50=99.25%, 100=0.33% 00:26:51.519 cpu : usr=97.91%, sys=1.50%, ctx=34, majf=0, minf=21 00:26:51.519 IO depths : 1=6.0%, 2=12.1%, 4=24.4%, 8=51.0%, 16=6.5%, 32=0.0%, >=64=0.0% 00:26:51.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.519 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:26:51.519 issued rwts: total=4802,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.519 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:51.519 filename1: (groupid=0, jobs=1): err= 0: pid=2459768: Mon Jul 15 23:30:05 2024 00:26:51.519 read: IOPS=479, BW=1918KiB/s (1964kB/s)(18.8MiB/10011msec) 00:26:51.519 slat (usec): min=8, max=110, avg=36.10, stdev=24.12 00:26:51.519 clat (usec): min=21549, max=44827, avg=33063.57, stdev=1146.69 00:26:51.519 lat (usec): min=21560, max=44855, avg=33099.67, stdev=1142.55 00:26:51.519 clat percentiles (usec): 00:26:51.519 | 1.00th=[31327], 5.00th=[31851], 10.00th=[32375], 20.00th=[32637], 00:26:51.519 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:26:51.519 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:26:51.519 | 99.00th=[36439], 99.50th=[36963], 99.90th=[42730], 99.95th=[44303], 00:26:51.519 | 99.99th=[44827] 00:26:51.519 bw ( KiB/s): min= 1792, max= 2048, per=4.17%, avg=1913.60, stdev=65.33, samples=20 00:26:51.519 iops : min= 448, max= 512, avg=478.40, stdev=16.33, samples=20 00:26:51.519 lat (msec) : 50=100.00% 00:26:51.519 cpu : usr=96.35%, sys=2.31%, ctx=106, majf=0, minf=27 00:26:51.519 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:26:51.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.519 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.519 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.519 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:51.519 filename1: (groupid=0, jobs=1): err= 0: pid=2459769: Mon Jul 15 23:30:05 2024 00:26:51.519 read: IOPS=477, BW=1911KiB/s (1957kB/s)(18.7MiB/10013msec) 00:26:51.519 slat (usec): min=8, max=129, avg=45.40, stdev=27.96 00:26:51.519 clat (usec): min=19730, max=70800, avg=33096.24, stdev=2575.97 00:26:51.519 lat (usec): min=19777, max=70834, avg=33141.64, stdev=2572.76 00:26:51.519 clat percentiles (usec): 00:26:51.519 | 1.00th=[31065], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:26:51.519 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:26:51.519 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:26:51.519 | 99.00th=[37487], 99.50th=[45351], 99.90th=[70779], 99.95th=[70779], 00:26:51.519 | 99.99th=[70779] 00:26:51.519 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1906.53, stdev=58.73, samples=19 00:26:51.519 iops : min= 416, max= 480, avg=476.63, stdev=14.68, samples=19 00:26:51.519 lat (msec) : 20=0.04%, 50=99.62%, 100=0.33% 00:26:51.519 cpu : usr=96.00%, sys=2.43%, ctx=99, majf=0, minf=23 00:26:51.519 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:26:51.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.519 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.519 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.519 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:51.519 filename1: (groupid=0, jobs=1): err= 0: pid=2459770: Mon Jul 15 23:30:05 2024 00:26:51.519 read: IOPS=478, BW=1913KiB/s (1958kB/s)(18.7MiB/10018msec) 00:26:51.519 slat (usec): min=10, max=114, avg=41.75, stdev=20.24 00:26:51.519 clat (usec): min=17499, max=52168, avg=33052.12, stdev=1441.86 00:26:51.519 lat (usec): min=17521, max=52188, avg=33093.87, stdev=1438.86 00:26:51.519 clat percentiles (usec): 00:26:51.519 | 1.00th=[31589], 5.00th=[31851], 
10.00th=[32113], 20.00th=[32637], 00:26:51.519 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:26:51.520 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:26:51.520 | 99.00th=[35914], 99.50th=[36439], 99.90th=[52167], 99.95th=[52167], 00:26:51.520 | 99.99th=[52167] 00:26:51.520 bw ( KiB/s): min= 1664, max= 2048, per=4.17%, avg=1913.60, stdev=65.33, samples=20 00:26:51.520 iops : min= 416, max= 512, avg=478.40, stdev=16.33, samples=20 00:26:51.520 lat (msec) : 20=0.13%, 50=99.54%, 100=0.33% 00:26:51.520 cpu : usr=93.05%, sys=3.71%, ctx=296, majf=0, minf=19 00:26:51.520 IO depths : 1=6.3%, 2=12.5%, 4=25.0%, 8=49.9%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:51.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.520 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.520 issued rwts: total=4790,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.520 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:51.520 filename1: (groupid=0, jobs=1): err= 0: pid=2459771: Mon Jul 15 23:30:05 2024 00:26:51.520 read: IOPS=479, BW=1919KiB/s (1965kB/s)(18.8MiB/10007msec) 00:26:51.520 slat (usec): min=8, max=110, avg=34.99, stdev=16.52 00:26:51.520 clat (usec): min=8140, max=69106, avg=33022.66, stdev=2758.36 00:26:51.520 lat (usec): min=8153, max=69141, avg=33057.65, stdev=2759.16 00:26:51.520 clat percentiles (usec): 00:26:51.520 | 1.00th=[31327], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:26:51.520 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:26:51.520 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:26:51.520 | 99.00th=[35914], 99.50th=[36439], 99.90th=[68682], 99.95th=[68682], 00:26:51.520 | 99.99th=[68682] 00:26:51.520 bw ( KiB/s): min= 1664, max= 2048, per=4.15%, avg=1906.53, stdev=84.20, samples=19 00:26:51.520 iops : min= 416, max= 512, avg=476.63, stdev=21.05, samples=19 00:26:51.520 lat (msec) : 10=0.33%, 20=0.33%, 50=99.00%, 100=0.33% 00:26:51.520 cpu : usr=97.74%, sys=1.79%, ctx=19, majf=0, minf=25 00:26:51.520 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:51.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.520 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.520 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.520 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:51.520 filename1: (groupid=0, jobs=1): err= 0: pid=2459772: Mon Jul 15 23:30:05 2024 00:26:51.520 read: IOPS=478, BW=1913KiB/s (1959kB/s)(18.7MiB/10003msec) 00:26:51.520 slat (usec): min=8, max=113, avg=40.74, stdev=24.38 00:26:51.520 clat (usec): min=20115, max=60027, avg=33080.17, stdev=1962.81 00:26:51.520 lat (usec): min=20133, max=60058, avg=33120.91, stdev=1959.92 00:26:51.520 clat percentiles (usec): 00:26:51.520 | 1.00th=[31327], 5.00th=[31851], 10.00th=[32113], 20.00th=[32637], 00:26:51.520 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:26:51.520 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:26:51.520 | 99.00th=[36963], 99.50th=[45351], 99.90th=[60031], 99.95th=[60031], 00:26:51.520 | 99.99th=[60031] 00:26:51.520 bw ( KiB/s): min= 1664, max= 2048, per=4.17%, avg=1913.26, stdev=67.11, samples=19 00:26:51.520 iops : min= 416, max= 512, avg=478.32, stdev=16.78, samples=19 00:26:51.520 lat (msec) : 50=99.67%, 100=0.33% 00:26:51.520 cpu : usr=92.83%, sys=3.75%, ctx=521, majf=0, 
minf=20 00:26:51.520 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:26:51.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.520 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.520 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.520 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:51.520 filename1: (groupid=0, jobs=1): err= 0: pid=2459773: Mon Jul 15 23:30:05 2024 00:26:51.520 read: IOPS=477, BW=1912KiB/s (1958kB/s)(18.7MiB/10009msec) 00:26:51.520 slat (nsec): min=9542, max=63449, avg=31507.58, stdev=8270.17 00:26:51.520 clat (usec): min=21841, max=61370, avg=33189.79, stdev=1828.40 00:26:51.520 lat (usec): min=21881, max=61402, avg=33221.30, stdev=1827.67 00:26:51.520 clat percentiles (usec): 00:26:51.520 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:26:51.520 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:26:51.520 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:26:51.520 | 99.00th=[36439], 99.50th=[36963], 99.90th=[61080], 99.95th=[61080], 00:26:51.520 | 99.99th=[61604] 00:26:51.520 bw ( KiB/s): min= 1664, max= 2048, per=4.15%, avg=1907.20, stdev=70.72, samples=20 00:26:51.520 iops : min= 416, max= 512, avg=476.80, stdev=17.68, samples=20 00:26:51.520 lat (msec) : 50=99.67%, 100=0.33% 00:26:51.520 cpu : usr=97.20%, sys=1.83%, ctx=212, majf=0, minf=26 00:26:51.520 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:26:51.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.520 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.520 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.520 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:51.520 filename1: (groupid=0, jobs=1): err= 0: pid=2459774: Mon Jul 15 23:30:05 2024 00:26:51.520 read: IOPS=480, BW=1921KiB/s (1967kB/s)(18.8MiB/10027msec) 00:26:51.520 slat (usec): min=8, max=155, avg=37.52, stdev=28.97 00:26:51.520 clat (usec): min=13494, max=44450, avg=32981.88, stdev=1092.43 00:26:51.520 lat (usec): min=13526, max=44507, avg=33019.40, stdev=1087.14 00:26:51.520 clat percentiles (usec): 00:26:51.520 | 1.00th=[30540], 5.00th=[31851], 10.00th=[32113], 20.00th=[32637], 00:26:51.520 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:26:51.520 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:26:51.520 | 99.00th=[36439], 99.50th=[36439], 99.90th=[36963], 99.95th=[36963], 00:26:51.520 | 99.99th=[44303] 00:26:51.520 bw ( KiB/s): min= 1792, max= 2048, per=4.18%, avg=1920.00, stdev=41.53, samples=20 00:26:51.520 iops : min= 448, max= 512, avg=480.00, stdev=10.38, samples=20 00:26:51.520 lat (msec) : 20=0.04%, 50=99.96% 00:26:51.520 cpu : usr=93.64%, sys=3.48%, ctx=221, majf=0, minf=24 00:26:51.520 IO depths : 1=6.1%, 2=12.4%, 4=24.8%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:26:51.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.520 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.520 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.520 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:51.520 filename2: (groupid=0, jobs=1): err= 0: pid=2459775: Mon Jul 15 23:30:05 2024 00:26:51.520 read: IOPS=480, BW=1921KiB/s (1968kB/s)(18.8MiB/10026msec) 00:26:51.520 slat (usec): min=8, 
max=120, avg=30.99, stdev=21.87 00:26:51.520 clat (usec): min=12423, max=44640, avg=33041.18, stdev=1320.47 00:26:51.520 lat (usec): min=12443, max=44697, avg=33072.16, stdev=1318.78 00:26:51.520 clat percentiles (usec): 00:26:51.520 | 1.00th=[31065], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:26:51.520 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:26:51.520 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:26:51.520 | 99.00th=[35914], 99.50th=[36439], 99.90th=[44303], 99.95th=[44303], 00:26:51.520 | 99.99th=[44827] 00:26:51.520 bw ( KiB/s): min= 1792, max= 2048, per=4.18%, avg=1920.00, stdev=41.53, samples=20 00:26:51.520 iops : min= 448, max= 512, avg=480.00, stdev=10.38, samples=20 00:26:51.520 lat (msec) : 20=0.19%, 50=99.81% 00:26:51.520 cpu : usr=97.10%, sys=2.01%, ctx=77, majf=0, minf=28 00:26:51.520 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:51.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.520 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.520 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.520 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:51.520 filename2: (groupid=0, jobs=1): err= 0: pid=2459776: Mon Jul 15 23:30:05 2024 00:26:51.520 read: IOPS=478, BW=1913KiB/s (1959kB/s)(18.7MiB/10003msec) 00:26:51.520 slat (nsec): min=6662, max=61448, avg=27019.39, stdev=10076.03 00:26:51.520 clat (usec): min=20337, max=60188, avg=33200.10, stdev=2148.44 00:26:51.520 lat (usec): min=20363, max=60220, avg=33227.12, stdev=2148.17 00:26:51.520 clat percentiles (usec): 00:26:51.520 | 1.00th=[31589], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:26:51.520 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:26:51.520 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:26:51.520 | 99.00th=[38011], 99.50th=[45876], 99.90th=[60031], 99.95th=[60031], 00:26:51.520 | 99.99th=[60031] 00:26:51.520 bw ( KiB/s): min= 1664, max= 2048, per=4.17%, avg=1913.26, stdev=67.32, samples=19 00:26:51.520 iops : min= 416, max= 512, avg=478.32, stdev=16.83, samples=19 00:26:51.520 lat (msec) : 50=99.67%, 100=0.33% 00:26:51.520 cpu : usr=92.15%, sys=4.24%, ctx=437, majf=0, minf=27 00:26:51.520 IO depths : 1=5.9%, 2=12.1%, 4=24.8%, 8=50.6%, 16=6.6%, 32=0.0%, >=64=0.0% 00:26:51.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.520 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.520 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.520 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:51.520 filename2: (groupid=0, jobs=1): err= 0: pid=2459777: Mon Jul 15 23:30:05 2024 00:26:51.520 read: IOPS=478, BW=1913KiB/s (1959kB/s)(18.7MiB/10005msec) 00:26:51.520 slat (usec): min=7, max=140, avg=26.48, stdev=10.60 00:26:51.520 clat (usec): min=18889, max=76677, avg=33226.74, stdev=2458.07 00:26:51.520 lat (usec): min=18902, max=76695, avg=33253.22, stdev=2457.92 00:26:51.520 clat percentiles (usec): 00:26:51.520 | 1.00th=[28443], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:26:51.520 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:26:51.520 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:26:51.520 | 99.00th=[44303], 99.50th=[45876], 99.90th=[61604], 99.95th=[61604], 00:26:51.520 | 99.99th=[77071] 00:26:51.520 bw ( KiB/s): 
min= 1664, max= 2048, per=4.15%, avg=1906.53, stdev=72.98, samples=19 00:26:51.520 iops : min= 416, max= 512, avg=476.63, stdev=18.25, samples=19 00:26:51.520 lat (msec) : 20=0.17%, 50=99.50%, 100=0.33% 00:26:51.520 cpu : usr=97.16%, sys=1.97%, ctx=122, majf=0, minf=32 00:26:51.520 IO depths : 1=5.6%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.9%, 32=0.0%, >=64=0.0% 00:26:51.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.520 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.520 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.520 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:51.520 filename2: (groupid=0, jobs=1): err= 0: pid=2459778: Mon Jul 15 23:30:05 2024 00:26:51.520 read: IOPS=478, BW=1915KiB/s (1961kB/s)(18.7MiB/10018msec) 00:26:51.520 slat (usec): min=8, max=108, avg=38.14, stdev=20.58 00:26:51.520 clat (usec): min=17546, max=52136, avg=33081.88, stdev=1518.86 00:26:51.520 lat (usec): min=17570, max=52172, avg=33120.01, stdev=1515.73 00:26:51.520 clat percentiles (usec): 00:26:51.520 | 1.00th=[31589], 5.00th=[31851], 10.00th=[32375], 20.00th=[32637], 00:26:51.520 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:26:51.520 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:26:51.520 | 99.00th=[35914], 99.50th=[36439], 99.90th=[52167], 99.95th=[52167], 00:26:51.521 | 99.99th=[52167] 00:26:51.521 bw ( KiB/s): min= 1664, max= 2048, per=4.17%, avg=1913.60, stdev=65.33, samples=20 00:26:51.521 iops : min= 416, max= 512, avg=478.40, stdev=16.33, samples=20 00:26:51.521 lat (msec) : 20=0.23%, 50=99.44%, 100=0.33% 00:26:51.521 cpu : usr=97.28%, sys=1.84%, ctx=58, majf=0, minf=20 00:26:51.521 IO depths : 1=6.3%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:51.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.521 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.521 issued rwts: total=4795,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.521 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:51.521 filename2: (groupid=0, jobs=1): err= 0: pid=2459779: Mon Jul 15 23:30:05 2024 00:26:51.521 read: IOPS=477, BW=1912KiB/s (1958kB/s)(18.7MiB/10009msec) 00:26:51.521 slat (nsec): min=10556, max=73098, avg=31251.26, stdev=8777.76 00:26:51.521 clat (usec): min=31617, max=61350, avg=33194.63, stdev=1769.47 00:26:51.521 lat (usec): min=31641, max=61383, avg=33225.88, stdev=1768.58 00:26:51.521 clat percentiles (usec): 00:26:51.521 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:26:51.521 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:26:51.521 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:26:51.521 | 99.00th=[36439], 99.50th=[36963], 99.90th=[61080], 99.95th=[61080], 00:26:51.521 | 99.99th=[61604] 00:26:51.521 bw ( KiB/s): min= 1664, max= 2048, per=4.15%, avg=1907.20, stdev=70.72, samples=20 00:26:51.521 iops : min= 416, max= 512, avg=476.80, stdev=17.68, samples=20 00:26:51.521 lat (msec) : 50=99.67%, 100=0.33% 00:26:51.521 cpu : usr=97.51%, sys=1.72%, ctx=136, majf=0, minf=25 00:26:51.521 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:51.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.521 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.521 issued rwts: total=4784,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:26:51.521 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:51.521 filename2: (groupid=0, jobs=1): err= 0: pid=2459780: Mon Jul 15 23:30:05 2024 00:26:51.521 read: IOPS=478, BW=1914KiB/s (1960kB/s)(18.8MiB/10030msec) 00:26:51.521 slat (usec): min=8, max=103, avg=39.32, stdev=17.70 00:26:51.521 clat (usec): min=21909, max=53386, avg=33087.62, stdev=1424.28 00:26:51.521 lat (usec): min=21922, max=53463, avg=33126.93, stdev=1423.83 00:26:51.521 clat percentiles (usec): 00:26:51.521 | 1.00th=[31327], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:26:51.521 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:26:51.521 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:26:51.521 | 99.00th=[36439], 99.50th=[36963], 99.90th=[53216], 99.95th=[53216], 00:26:51.521 | 99.99th=[53216] 00:26:51.521 bw ( KiB/s): min= 1664, max= 2048, per=4.17%, avg=1913.60, stdev=65.33, samples=20 00:26:51.521 iops : min= 416, max= 512, avg=478.40, stdev=16.33, samples=20 00:26:51.521 lat (msec) : 50=99.67%, 100=0.33% 00:26:51.521 cpu : usr=94.83%, sys=3.15%, ctx=265, majf=0, minf=25 00:26:51.521 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:51.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.521 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.521 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.521 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:51.521 filename2: (groupid=0, jobs=1): err= 0: pid=2459781: Mon Jul 15 23:30:05 2024 00:26:51.521 read: IOPS=477, BW=1911KiB/s (1957kB/s)(18.7MiB/10011msec) 00:26:51.521 slat (usec): min=8, max=126, avg=34.26, stdev=14.35 00:26:51.521 clat (usec): min=23171, max=74094, avg=33163.50, stdev=1989.23 00:26:51.521 lat (usec): min=23182, max=74114, avg=33197.76, stdev=1988.59 00:26:51.521 clat percentiles (usec): 00:26:51.521 | 1.00th=[31589], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:26:51.521 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:26:51.521 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:26:51.521 | 99.00th=[36439], 99.50th=[36963], 99.90th=[63177], 99.95th=[63177], 00:26:51.521 | 99.99th=[73925] 00:26:51.521 bw ( KiB/s): min= 1664, max= 2048, per=4.15%, avg=1907.20, stdev=82.01, samples=20 00:26:51.521 iops : min= 416, max= 512, avg=476.80, stdev=20.50, samples=20 00:26:51.521 lat (msec) : 50=99.67%, 100=0.33% 00:26:51.521 cpu : usr=97.31%, sys=1.82%, ctx=37, majf=0, minf=29 00:26:51.521 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:26:51.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.521 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.521 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.521 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:51.521 filename2: (groupid=0, jobs=1): err= 0: pid=2459782: Mon Jul 15 23:30:05 2024 00:26:51.521 read: IOPS=489, BW=1958KiB/s (2005kB/s)(19.1MiB/10008msec) 00:26:51.521 slat (nsec): min=8137, max=94588, avg=25998.70, stdev=11727.89 00:26:51.521 clat (usec): min=8166, max=69007, avg=32516.29, stdev=4255.64 00:26:51.521 lat (usec): min=8181, max=69037, avg=32542.29, stdev=4257.87 00:26:51.521 clat percentiles (usec): 00:26:51.521 | 1.00th=[17695], 5.00th=[23725], 10.00th=[28443], 20.00th=[32637], 00:26:51.521 | 
30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:26:51.521 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[35914], 00:26:51.521 | 99.00th=[44303], 99.50th=[45876], 99.90th=[68682], 99.95th=[68682], 00:26:51.521 | 99.99th=[68682] 00:26:51.521 bw ( KiB/s): min= 1648, max= 2224, per=4.24%, avg=1947.79, stdev=120.19, samples=19 00:26:51.521 iops : min= 412, max= 556, avg=486.95, stdev=30.05, samples=19 00:26:51.521 lat (msec) : 10=0.37%, 20=0.78%, 50=98.53%, 100=0.33% 00:26:51.521 cpu : usr=97.91%, sys=1.63%, ctx=35, majf=0, minf=30 00:26:51.521 IO depths : 1=0.3%, 2=4.8%, 4=18.3%, 8=63.2%, 16=13.3%, 32=0.0%, >=64=0.0% 00:26:51.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.521 complete : 0=0.0%, 4=92.8%, 8=2.7%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.521 issued rwts: total=4898,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.521 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:51.521 00:26:51.521 Run status group 0 (all jobs): 00:26:51.521 READ: bw=44.8MiB/s (47.0MB/s), 1911KiB/s-1958KiB/s (1957kB/s-2005kB/s), io=450MiB (472MB), run=10002-10032msec 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:51.521 bdev_null0 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.521 23:30:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:51.522 [2024-07-15 23:30:05.589912] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:51.522 bdev_null1 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:51.522 23:30:05 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:51.522 { 00:26:51.522 "params": { 00:26:51.522 "name": "Nvme$subsystem", 00:26:51.522 "trtype": "$TEST_TRANSPORT", 00:26:51.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:51.522 "adrfam": "ipv4", 00:26:51.522 "trsvcid": "$NVMF_PORT", 00:26:51.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:51.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:51.522 "hdgst": ${hdgst:-false}, 00:26:51.522 "ddgst": ${ddgst:-false} 00:26:51.522 }, 00:26:51.522 "method": "bdev_nvme_attach_controller" 00:26:51.522 } 00:26:51.522 EOF 00:26:51.522 )") 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:51.522 { 00:26:51.522 "params": { 00:26:51.522 "name": "Nvme$subsystem", 00:26:51.522 "trtype": "$TEST_TRANSPORT", 00:26:51.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:51.522 "adrfam": "ipv4", 00:26:51.522 "trsvcid": "$NVMF_PORT", 00:26:51.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:51.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:51.522 "hdgst": ${hdgst:-false}, 00:26:51.522 "ddgst": ${ddgst:-false} 00:26:51.522 }, 00:26:51.522 "method": "bdev_nvme_attach_controller" 00:26:51.522 } 00:26:51.522 EOF 00:26:51.522 )") 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:51.522 "params": { 00:26:51.522 "name": "Nvme0", 00:26:51.522 "trtype": "tcp", 00:26:51.522 "traddr": "10.0.0.2", 00:26:51.522 "adrfam": "ipv4", 00:26:51.522 "trsvcid": "4420", 00:26:51.522 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:51.522 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:51.522 "hdgst": false, 00:26:51.522 "ddgst": false 00:26:51.522 }, 00:26:51.522 "method": "bdev_nvme_attach_controller" 00:26:51.522 },{ 00:26:51.522 "params": { 00:26:51.522 "name": "Nvme1", 00:26:51.522 "trtype": "tcp", 00:26:51.522 "traddr": "10.0.0.2", 00:26:51.522 "adrfam": "ipv4", 00:26:51.522 "trsvcid": "4420", 00:26:51.522 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:51.522 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:51.522 "hdgst": false, 00:26:51.522 "ddgst": false 00:26:51.522 }, 00:26:51.522 "method": "bdev_nvme_attach_controller" 00:26:51.522 }' 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:51.522 23:30:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:51.522 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:51.522 ... 00:26:51.522 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:51.522 ... 
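The two job definitions fio echoes back here come from gen_fio_conf, which is streamed to fio on a file descriptor alongside the JSON bdev config (the --spdk_json_conf /dev/fd/62 ... /dev/fd/61 invocation above). A standalone job file with the same shape would look roughly like the sketch below; the bdev names Nvme0n1/Nvme1n1 follow SPDK's usual "<controller>n<nsid>" naming and are assumptions, not values printed in this log.

# Sketch only: approximate equivalent of the gen_fio_conf output for this run
# (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5 per the target/dif.sh@115 lines above).
cat > /tmp/dif_rand_params.fio <<'EOF'
[global]
ioengine=spdk_bdev   ; served by the LD_PRELOADed build/fio/spdk_bdev plugin
thread=1
rw=randread
bs=8k,16k,128k       ; read,write,trim block sizes
iodepth=8
numjobs=2
time_based=1
runtime=5

[filename0]
filename=Nvme0n1     ; assumed bdev name for the namespace of cnode0

[filename1]
filename=Nvme1n1     ; assumed bdev name for the namespace of cnode1
EOF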
00:26:51.522 fio-3.35 00:26:51.522 Starting 4 threads 00:26:51.522 EAL: No free 2048 kB hugepages reported on node 1 00:26:56.785 00:26:56.785 filename0: (groupid=0, jobs=1): err= 0: pid=2461274: Mon Jul 15 23:30:11 2024 00:26:56.785 read: IOPS=1906, BW=14.9MiB/s (15.6MB/s)(74.5MiB/5003msec) 00:26:56.785 slat (nsec): min=3895, max=44494, avg=13759.97, stdev=4792.22 00:26:56.785 clat (usec): min=691, max=8841, avg=4147.19, stdev=593.38 00:26:56.785 lat (usec): min=704, max=8882, avg=4160.95, stdev=593.37 00:26:56.785 clat percentiles (usec): 00:26:56.785 | 1.00th=[ 2540], 5.00th=[ 3458], 10.00th=[ 3654], 20.00th=[ 3752], 00:26:56.785 | 30.00th=[ 3851], 40.00th=[ 3982], 50.00th=[ 4113], 60.00th=[ 4228], 00:26:56.785 | 70.00th=[ 4424], 80.00th=[ 4490], 90.00th=[ 4621], 95.00th=[ 4817], 00:26:56.785 | 99.00th=[ 6521], 99.50th=[ 7111], 99.90th=[ 8094], 99.95th=[ 8586], 00:26:56.785 | 99.99th=[ 8848] 00:26:56.785 bw ( KiB/s): min=14416, max=16768, per=25.38%, avg=15249.60, stdev=762.40, samples=10 00:26:56.785 iops : min= 1802, max= 2096, avg=1906.20, stdev=95.30, samples=10 00:26:56.785 lat (usec) : 750=0.02%, 1000=0.03% 00:26:56.785 lat (msec) : 2=0.45%, 4=40.77%, 10=58.73% 00:26:56.785 cpu : usr=88.40%, sys=8.84%, ctx=275, majf=0, minf=9 00:26:56.785 IO depths : 1=0.5%, 2=15.2%, 4=57.9%, 8=26.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:56.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:56.785 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:56.785 issued rwts: total=9539,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:56.785 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:56.785 filename0: (groupid=0, jobs=1): err= 0: pid=2461275: Mon Jul 15 23:30:11 2024 00:26:56.785 read: IOPS=1892, BW=14.8MiB/s (15.5MB/s)(74.0MiB/5002msec) 00:26:56.785 slat (nsec): min=5202, max=55027, avg=14697.83, stdev=4155.16 00:26:56.785 clat (usec): min=764, max=8249, avg=4171.79, stdev=643.62 00:26:56.785 lat (usec): min=777, max=8266, avg=4186.49, stdev=643.61 00:26:56.785 clat percentiles (usec): 00:26:56.785 | 1.00th=[ 2024], 5.00th=[ 3523], 10.00th=[ 3654], 20.00th=[ 3752], 00:26:56.785 | 30.00th=[ 3884], 40.00th=[ 3982], 50.00th=[ 4146], 60.00th=[ 4293], 00:26:56.785 | 70.00th=[ 4424], 80.00th=[ 4555], 90.00th=[ 4621], 95.00th=[ 4948], 00:26:56.785 | 99.00th=[ 6652], 99.50th=[ 7177], 99.90th=[ 7898], 99.95th=[ 7963], 00:26:56.785 | 99.99th=[ 8225] 00:26:56.785 bw ( KiB/s): min=14192, max=16912, per=25.19%, avg=15135.80, stdev=768.89, samples=10 00:26:56.785 iops : min= 1774, max= 2114, avg=1891.90, stdev=96.09, samples=10 00:26:56.785 lat (usec) : 1000=0.17% 00:26:56.785 lat (msec) : 2=0.82%, 4=39.76%, 10=59.24% 00:26:56.785 cpu : usr=91.22%, sys=7.40%, ctx=19, majf=0, minf=9 00:26:56.785 IO depths : 1=0.2%, 2=17.0%, 4=56.0%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:56.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:56.785 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:56.785 issued rwts: total=9466,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:56.785 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:56.785 filename1: (groupid=0, jobs=1): err= 0: pid=2461276: Mon Jul 15 23:30:11 2024 00:26:56.785 read: IOPS=1865, BW=14.6MiB/s (15.3MB/s)(72.9MiB/5003msec) 00:26:56.785 slat (usec): min=5, max=217, avg=13.99, stdev= 4.31 00:26:56.785 clat (usec): min=633, max=8387, avg=4236.12, stdev=734.97 00:26:56.785 lat (usec): min=646, max=8402, avg=4250.12, stdev=734.74 
00:26:56.785 clat percentiles (usec): 00:26:56.785 | 1.00th=[ 2114], 5.00th=[ 3556], 10.00th=[ 3654], 20.00th=[ 3752], 00:26:56.785 | 30.00th=[ 3884], 40.00th=[ 4015], 50.00th=[ 4178], 60.00th=[ 4359], 00:26:56.785 | 70.00th=[ 4490], 80.00th=[ 4555], 90.00th=[ 4752], 95.00th=[ 5538], 00:26:56.785 | 99.00th=[ 7111], 99.50th=[ 7504], 99.90th=[ 8029], 99.95th=[ 8160], 00:26:56.785 | 99.99th=[ 8356] 00:26:56.785 bw ( KiB/s): min=13344, max=16896, per=24.84%, avg=14923.00, stdev=1007.29, samples=10 00:26:56.785 iops : min= 1668, max= 2112, avg=1865.30, stdev=125.87, samples=10 00:26:56.785 lat (usec) : 750=0.01%, 1000=0.13% 00:26:56.785 lat (msec) : 2=0.79%, 4=38.09%, 10=60.98% 00:26:56.785 cpu : usr=93.56%, sys=5.66%, ctx=8, majf=0, minf=9 00:26:56.785 IO depths : 1=0.1%, 2=17.3%, 4=55.3%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:56.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:56.785 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:56.786 issued rwts: total=9333,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:56.786 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:56.786 filename1: (groupid=0, jobs=1): err= 0: pid=2461277: Mon Jul 15 23:30:11 2024 00:26:56.786 read: IOPS=1891, BW=14.8MiB/s (15.5MB/s)(74.5MiB/5043msec) 00:26:56.786 slat (nsec): min=5634, max=38147, avg=13729.85, stdev=3745.05 00:26:56.786 clat (usec): min=849, max=42404, avg=4149.27, stdev=725.85 00:26:56.786 lat (usec): min=862, max=42418, avg=4163.00, stdev=725.79 00:26:56.786 clat percentiles (usec): 00:26:56.786 | 1.00th=[ 2343], 5.00th=[ 3490], 10.00th=[ 3621], 20.00th=[ 3752], 00:26:56.786 | 30.00th=[ 3851], 40.00th=[ 3949], 50.00th=[ 4113], 60.00th=[ 4293], 00:26:56.786 | 70.00th=[ 4424], 80.00th=[ 4490], 90.00th=[ 4621], 95.00th=[ 4752], 00:26:56.786 | 99.00th=[ 6783], 99.50th=[ 7046], 99.90th=[ 7635], 99.95th=[ 7898], 00:26:56.786 | 99.99th=[42206] 00:26:56.786 bw ( KiB/s): min=13888, max=16768, per=25.38%, avg=15252.50, stdev=824.77, samples=10 00:26:56.786 iops : min= 1736, max= 2096, avg=1906.50, stdev=103.12, samples=10 00:26:56.786 lat (usec) : 1000=0.05% 00:26:56.786 lat (msec) : 2=0.55%, 4=42.28%, 10=57.11%, 50=0.01% 00:26:56.786 cpu : usr=92.48%, sys=6.55%, ctx=79, majf=0, minf=9 00:26:56.786 IO depths : 1=0.1%, 2=16.8%, 4=56.3%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:56.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:56.786 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:56.786 issued rwts: total=9539,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:56.786 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:56.786 00:26:56.786 Run status group 0 (all jobs): 00:26:56.786 READ: bw=58.7MiB/s (61.5MB/s), 14.6MiB/s-14.9MiB/s (15.3MB/s-15.6MB/s), io=296MiB (310MB), run=5002-5043msec 00:26:56.786 23:30:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:26:56.786 23:30:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:56.786 23:30:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:56.786 23:30:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:56.786 23:30:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:56.786 23:30:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:56.786 23:30:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:26:56.786 23:30:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:56.786 23:30:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.786 23:30:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:56.786 23:30:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.786 23:30:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:56.786 23:30:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.786 23:30:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:56.786 23:30:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:56.786 23:30:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:56.786 23:30:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:56.786 23:30:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.786 23:30:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:56.786 23:30:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.786 23:30:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:56.786 23:30:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.786 23:30:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:56.786 23:30:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.786 00:26:56.786 real 0m24.422s 00:26:56.786 user 4m28.062s 00:26:56.786 sys 0m9.311s 00:26:56.786 23:30:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:56.786 23:30:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:56.786 ************************************ 00:26:56.786 END TEST fio_dif_rand_params 00:26:56.786 ************************************ 00:26:56.786 23:30:11 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:56.786 23:30:11 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:56.786 23:30:11 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:56.786 23:30:11 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:56.786 23:30:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:56.786 ************************************ 00:26:56.786 START TEST fio_dif_digest 00:26:56.786 ************************************ 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # 
runtime=10 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:56.786 bdev_null0 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:56.786 [2024-07-15 23:30:12.057306] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 
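The rpc_cmd calls in this stretch are the harness wrapper around SPDK's in-tree RPC client; issued directly, the same DIF-enabled null bdev and NVMe/TCP export look roughly like the sketch below. It assumes nvmf_tgt is already running with the TCP transport created (nvmf_create_transport -t tcp -o -u 8192, as seen later in this log), and the rpc.py path is the in-tree location used by this workspace.

# Sketch: the create_subsystem 0 steps above, issued with scripts/rpc.py
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3          # 64 MB null bdev, 512 B blocks + 16 B metadata, DIF type 3
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0           # expose the null bdev as a namespace
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420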
00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:56.786 { 00:26:56.786 "params": { 00:26:56.786 "name": "Nvme$subsystem", 00:26:56.786 "trtype": "$TEST_TRANSPORT", 00:26:56.786 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:56.786 "adrfam": "ipv4", 00:26:56.786 "trsvcid": "$NVMF_PORT", 00:26:56.786 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:56.786 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:56.786 "hdgst": ${hdgst:-false}, 00:26:56.786 "ddgst": ${ddgst:-false} 00:26:56.786 }, 00:26:56.786 "method": "bdev_nvme_attach_controller" 00:26:56.786 } 00:26:56.786 EOF 00:26:56.786 )") 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:56.786 "params": { 00:26:56.786 "name": "Nvme0", 00:26:56.786 "trtype": "tcp", 00:26:56.786 "traddr": "10.0.0.2", 00:26:56.786 "adrfam": "ipv4", 00:26:56.786 "trsvcid": "4420", 00:26:56.786 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:56.786 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:56.786 "hdgst": true, 00:26:56.786 "ddgst": true 00:26:56.786 }, 00:26:56.786 "method": "bdev_nvme_attach_controller" 00:26:56.786 }' 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:56.786 23:30:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:57.044 23:30:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:57.044 23:30:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:57.044 23:30:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:57.044 23:30:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:57.044 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:57.044 ... 
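The printf output above is the attach-controller parameter block the fio wrapper feeds to the bdev layer, here with header and data digests enabled. A standalone reproduction of the same digest run is sketched below: the "params" block and the LD_PRELOAD invocation are copied from this log, while the outer "subsystems"/"config" wrapper (the usual --spdk_json_conf layout) and the /tmp file paths are assumptions.

# Sketch: standalone equivalent of the fio_bdev invocation above
cat > /tmp/nvme0_tcp.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": true,
            "ddgst": true
          }
        }
      ]
    }
  ]
}
EOF
# /tmp/dif_digest.fio is a job file along the lines of the earlier sketch
# (bs=128k, iodepth=3, numjobs=3, runtime=10 per target/dif.sh@127 above).
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/nvme0_tcp.json /tmp/dif_digest.fio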
00:26:57.044 fio-3.35 00:26:57.044 Starting 3 threads 00:26:57.044 EAL: No free 2048 kB hugepages reported on node 1 00:27:09.238 00:27:09.238 filename0: (groupid=0, jobs=1): err= 0: pid=2462654: Mon Jul 15 23:30:23 2024 00:27:09.238 read: IOPS=209, BW=26.1MiB/s (27.4MB/s)(263MiB/10045msec) 00:27:09.238 slat (nsec): min=4948, max=54424, avg=20646.12, stdev=5552.30 00:27:09.238 clat (usec): min=10030, max=51526, avg=14299.74, stdev=1708.78 00:27:09.238 lat (usec): min=10044, max=51541, avg=14320.39, stdev=1708.91 00:27:09.238 clat percentiles (usec): 00:27:09.238 | 1.00th=[11731], 5.00th=[12387], 10.00th=[12780], 20.00th=[13173], 00:27:09.238 | 30.00th=[13566], 40.00th=[13829], 50.00th=[14091], 60.00th=[14484], 00:27:09.238 | 70.00th=[14746], 80.00th=[15270], 90.00th=[15926], 95.00th=[16581], 00:27:09.238 | 99.00th=[17957], 99.50th=[18482], 99.90th=[21103], 99.95th=[50070], 00:27:09.238 | 99.99th=[51643] 00:27:09.238 bw ( KiB/s): min=23296, max=28160, per=32.60%, avg=26854.40, stdev=1416.61, samples=20 00:27:09.238 iops : min= 182, max= 220, avg=209.80, stdev=11.07, samples=20 00:27:09.238 lat (msec) : 20=99.76%, 50=0.14%, 100=0.10% 00:27:09.238 cpu : usr=93.10%, sys=6.40%, ctx=40, majf=0, minf=51 00:27:09.238 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:09.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.238 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.238 issued rwts: total=2101,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:09.238 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:09.238 filename0: (groupid=0, jobs=1): err= 0: pid=2462655: Mon Jul 15 23:30:23 2024 00:27:09.238 read: IOPS=214, BW=26.8MiB/s (28.1MB/s)(270MiB/10045msec) 00:27:09.238 slat (usec): min=4, max=102, avg=20.06, stdev= 6.93 00:27:09.238 clat (usec): min=10429, max=52397, avg=13932.48, stdev=1755.30 00:27:09.238 lat (usec): min=10458, max=52417, avg=13952.54, stdev=1755.21 00:27:09.238 clat percentiles (usec): 00:27:09.238 | 1.00th=[11338], 5.00th=[11994], 10.00th=[12387], 20.00th=[12780], 00:27:09.238 | 30.00th=[13173], 40.00th=[13435], 50.00th=[13698], 60.00th=[14091], 00:27:09.238 | 70.00th=[14484], 80.00th=[14877], 90.00th=[15664], 95.00th=[16319], 00:27:09.238 | 99.00th=[17695], 99.50th=[17957], 99.90th=[25297], 99.95th=[49021], 00:27:09.238 | 99.99th=[52167] 00:27:09.238 bw ( KiB/s): min=23599, max=29440, per=33.47%, avg=27573.55, stdev=1595.98, samples=20 00:27:09.238 iops : min= 184, max= 230, avg=215.40, stdev=12.52, samples=20 00:27:09.238 lat (msec) : 20=99.77%, 50=0.19%, 100=0.05% 00:27:09.238 cpu : usr=88.91%, sys=7.86%, ctx=622, majf=0, minf=159 00:27:09.238 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:09.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.238 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.238 issued rwts: total=2156,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:09.238 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:09.238 filename0: (groupid=0, jobs=1): err= 0: pid=2462656: Mon Jul 15 23:30:23 2024 00:27:09.238 read: IOPS=219, BW=27.5MiB/s (28.8MB/s)(276MiB/10047msec) 00:27:09.238 slat (nsec): min=4520, max=51554, avg=17817.19, stdev=5061.71 00:27:09.238 clat (usec): min=9624, max=50407, avg=13604.84, stdev=1658.25 00:27:09.238 lat (usec): min=9641, max=50416, avg=13622.65, stdev=1657.93 00:27:09.238 clat percentiles (usec): 00:27:09.238 | 1.00th=[10945], 
5.00th=[11600], 10.00th=[12125], 20.00th=[12518], 00:27:09.238 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13435], 60.00th=[13698], 00:27:09.238 | 70.00th=[14091], 80.00th=[14615], 90.00th=[15270], 95.00th=[15795], 00:27:09.238 | 99.00th=[16909], 99.50th=[17433], 99.90th=[20579], 99.95th=[47449], 00:27:09.238 | 99.99th=[50594] 00:27:09.238 bw ( KiB/s): min=24576, max=29952, per=34.28%, avg=28236.80, stdev=1495.26, samples=20 00:27:09.238 iops : min= 192, max= 234, avg=220.60, stdev=11.68, samples=20 00:27:09.238 lat (msec) : 10=0.09%, 20=99.68%, 50=0.18%, 100=0.05% 00:27:09.238 cpu : usr=94.61%, sys=4.89%, ctx=22, majf=0, minf=155 00:27:09.238 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:09.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.238 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.238 issued rwts: total=2209,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:09.238 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:09.238 00:27:09.238 Run status group 0 (all jobs): 00:27:09.238 READ: bw=80.4MiB/s (84.4MB/s), 26.1MiB/s-27.5MiB/s (27.4MB/s-28.8MB/s), io=808MiB (848MB), run=10045-10047msec 00:27:09.238 23:30:23 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:27:09.238 23:30:23 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:27:09.238 23:30:23 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:27:09.238 23:30:23 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:09.238 23:30:23 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:27:09.238 23:30:23 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:09.238 23:30:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.238 23:30:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:09.238 23:30:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.238 23:30:23 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:09.238 23:30:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.238 23:30:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:09.238 23:30:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.238 00:27:09.238 real 0m11.262s 00:27:09.238 user 0m28.931s 00:27:09.238 sys 0m2.206s 00:27:09.238 23:30:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:09.238 23:30:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:09.238 ************************************ 00:27:09.238 END TEST fio_dif_digest 00:27:09.238 ************************************ 00:27:09.238 23:30:23 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:27:09.238 23:30:23 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:27:09.238 23:30:23 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:27:09.238 23:30:23 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:09.238 23:30:23 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:27:09.238 23:30:23 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:09.238 23:30:23 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:27:09.238 23:30:23 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:09.238 23:30:23 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:27:09.238 rmmod nvme_tcp 00:27:09.238 rmmod nvme_fabrics 00:27:09.238 rmmod nvme_keyring 00:27:09.238 23:30:23 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:09.238 23:30:23 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:27:09.239 23:30:23 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:27:09.239 23:30:23 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 2455779 ']' 00:27:09.239 23:30:23 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 2455779 00:27:09.239 23:30:23 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 2455779 ']' 00:27:09.239 23:30:23 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 2455779 00:27:09.239 23:30:23 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:27:09.239 23:30:23 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:09.239 23:30:23 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2455779 00:27:09.239 23:30:23 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:09.239 23:30:23 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:09.239 23:30:23 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2455779' 00:27:09.239 killing process with pid 2455779 00:27:09.239 23:30:23 nvmf_dif -- common/autotest_common.sh@967 -- # kill 2455779 00:27:09.239 23:30:23 nvmf_dif -- common/autotest_common.sh@972 -- # wait 2455779 00:27:09.239 23:30:23 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:27:09.239 23:30:23 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:09.496 Waiting for block devices as requested 00:27:09.754 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:27:09.754 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:09.754 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:10.011 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:10.011 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:10.011 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:10.269 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:10.269 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:10.269 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:10.269 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:10.269 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:10.526 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:10.526 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:10.526 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:10.526 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:10.784 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:10.784 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:10.784 23:30:26 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:10.784 23:30:26 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:10.784 23:30:26 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:10.784 23:30:26 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:10.784 23:30:26 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.784 23:30:26 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:10.784 23:30:26 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:13.313 23:30:28 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:13.313 00:27:13.313 real 1m7.863s 00:27:13.313 user 6m25.067s 00:27:13.313 sys 0m21.837s 00:27:13.313 23:30:28 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 
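The rmmod/kill/rebind messages above are nvmftestfini unwinding the test environment. Condensed into plain commands it amounts to roughly the following; the pid, namespace, and interface names are taken from this log, and the netns deletion is an assumed equivalent of _remove_spdk_ns rather than a copy of it.

# Sketch of the teardown performed above
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill 2455779                                   # nvmf_tgt pid for this run (killprocess)
while kill -0 2455779 2>/dev/null; do sleep 0.2; done
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset   # rebind NVMe/ioatdma devices to kernel drivers
ip netns delete cvl_0_0_ns_spdk 2>/dev/null    # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1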
00:27:13.313 23:30:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:13.313 ************************************ 00:27:13.313 END TEST nvmf_dif 00:27:13.313 ************************************ 00:27:13.313 23:30:28 -- common/autotest_common.sh@1142 -- # return 0 00:27:13.313 23:30:28 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:27:13.313 23:30:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:13.313 23:30:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:13.313 23:30:28 -- common/autotest_common.sh@10 -- # set +x 00:27:13.313 ************************************ 00:27:13.313 START TEST nvmf_abort_qd_sizes 00:27:13.313 ************************************ 00:27:13.313 23:30:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:27:13.313 * Looking for test storage... 00:27:13.313 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:13.313 23:30:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:13.313 23:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:27:13.313 23:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:13.313 23:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:13.313 23:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:13.313 23:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:13.313 23:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:13.313 23:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:13.313 23:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:13.313 23:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:13.313 23:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:13.313 23:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:13.313 23:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:13.313 23:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:27:13.313 23:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:13.313 23:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:13.313 23:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:13.313 23:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:13.313 23:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:13.313 23:30:28 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:13.313 23:30:28 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:13.313 23:30:28 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:13.313 23:30:28 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.313 23:30:28 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.313 23:30:28 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.313 23:30:28 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:27:13.313 23:30:28 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.313 23:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:27:13.313 23:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:13.313 23:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:13.313 23:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:13.313 23:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:13.313 23:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:13.313 23:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:13.313 23:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:13.313 23:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:13.313 23:30:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:27:13.313 23:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:13.313 23:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:13.313 23:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:13.313 23:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:13.313 23:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:13.313 23:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:13.313 23:30:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:13.313 23:30:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:13.313 23:30:28 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:13.313 23:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:13.313 23:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:27:13.313 23:30:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:15.215 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:15.215 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:15.215 Found net devices under 0000:84:00.0: cvl_0_0 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:15.215 Found net devices under 0000:84:00.1: cvl_0_1 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
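The device discovery above keeps an e810 port only if it exposes a kernel net device that is up; the PCI-address-to-interface mapping comes from sysfs, roughly as in the sketch below. The PCI addresses are the two reported above, and checking operstate here is a simplification of the harness's "up" test, not its exact logic.

# Sketch: map the detected NVMf-capable PCI functions to their net devices
for pci in 0000:84:00.0 0000:84:00.1; do
  for dev in /sys/bus/pci/devices/"$pci"/net/*; do
    [ -e "$dev" ] || continue
    name=$(basename "$dev")
    state=$(cat "$dev/operstate" 2>/dev/null)
    echo "$pci -> $name ($state)"            # expected: cvl_0_0 / cvl_0_1 per the log above
  done
done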
00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:15.215 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:15.216 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:15.216 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:15.216 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:15.216 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:15.216 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:15.216 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:15.216 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:15.216 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:15.216 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:15.216 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:15.216 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:15.216 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:15.216 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:15.216 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:15.216 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:15.216 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:15.216 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:27:15.216 00:27:15.216 --- 10.0.0.2 ping statistics --- 00:27:15.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:15.216 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:27:15.216 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:15.216 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:15.216 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:27:15.216 00:27:15.216 --- 10.0.0.1 ping statistics --- 00:27:15.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:15.216 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:27:15.216 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:15.216 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:27:15.216 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:27:15.216 23:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:16.152 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:16.152 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:16.152 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:16.152 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:16.152 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:16.152 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:16.152 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:16.410 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:16.410 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:16.410 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:16.410 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:16.410 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:16.410 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:16.410 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:16.410 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:16.410 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:17.343 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:27:17.343 23:30:32 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:17.343 23:30:32 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:17.343 23:30:32 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:17.343 23:30:32 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:17.343 23:30:32 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:17.343 23:30:32 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:17.343 23:30:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:27:17.343 23:30:32 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:17.343 23:30:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:17.343 23:30:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:17.343 23:30:32 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=2467477 00:27:17.343 23:30:32 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:27:17.343 23:30:32 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 2467477 00:27:17.343 23:30:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 2467477 ']' 00:27:17.343 23:30:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:17.343 23:30:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:17.343 23:30:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
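nvmfappstart launches nvmf_tgt inside the test namespace and waitforlisten blocks until the application answers on its RPC socket. A standalone equivalent is sketched below; the binary path, netns name, and core/event masks are taken from this log, while the readiness poll with rpc_get_methods is a simplified stand-in for waitforlisten, not its exact implementation.

# Sketch: start the target in the test netns and wait for its RPC socket
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xf &
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5
done
echo "nvmf_tgt is up and listening on /var/tmp/spdk.sock"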
00:27:17.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:17.343 23:30:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:17.343 23:30:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:17.343 [2024-07-15 23:30:32.650138] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:27:17.343 [2024-07-15 23:30:32.650224] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:17.633 EAL: No free 2048 kB hugepages reported on node 1 00:27:17.633 [2024-07-15 23:30:32.714214] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:17.633 [2024-07-15 23:30:32.834818] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:17.633 [2024-07-15 23:30:32.834867] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:17.633 [2024-07-15 23:30:32.834881] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:17.633 [2024-07-15 23:30:32.834892] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:17.633 [2024-07-15 23:30:32.834903] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:17.633 [2024-07-15 23:30:32.834960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:17.633 [2024-07-15 23:30:32.834991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:17.633 [2024-07-15 23:30:32.835047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:17.633 [2024-07-15 23:30:32.835050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:17.921 23:30:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:17.922 23:30:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:27:17.922 23:30:32 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:17.922 23:30:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:17.922 23:30:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:17.922 23:30:32 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:17.922 23:30:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:27:17.922 23:30:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:27:17.922 23:30:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:27:17.922 23:30:32 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:27:17.922 23:30:32 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:27:17.922 23:30:32 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:82:00.0 ]] 00:27:17.922 23:30:32 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:27:17.922 23:30:32 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:27:17.922 23:30:32 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:82:00.0 ]] 00:27:17.922 23:30:32 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:27:17.922 23:30:32 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:27:17.922 23:30:32 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:27:17.922 23:30:32 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:27:17.922 23:30:32 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:82:00.0 00:27:17.922 23:30:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:27:17.922 23:30:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:82:00.0 00:27:17.922 23:30:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:27:17.922 23:30:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:17.922 23:30:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:17.922 23:30:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:17.922 ************************************ 00:27:17.922 START TEST spdk_target_abort 00:27:17.922 ************************************ 00:27:17.922 23:30:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:27:17.922 23:30:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:27:17.922 23:30:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:82:00.0 -b spdk_target 00:27:17.922 23:30:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.922 23:30:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:21.198 spdk_targetn1 00:27:21.198 23:30:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.198 23:30:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:21.198 23:30:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.198 23:30:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:21.198 [2024-07-15 23:30:35.863045] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:21.198 23:30:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.198 23:30:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:27:21.198 23:30:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.198 23:30:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:21.198 23:30:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.198 23:30:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:27:21.198 23:30:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.198 23:30:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:21.198 23:30:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.198 23:30:35 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:27:21.198 23:30:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.198 23:30:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:21.198 [2024-07-15 23:30:35.895303] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:21.198 23:30:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.198 23:30:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:27:21.198 23:30:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:21.198 23:30:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:21.198 23:30:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:27:21.198 23:30:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:21.198 23:30:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:27:21.198 23:30:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:21.198 23:30:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:21.198 23:30:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:21.198 23:30:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:21.198 23:30:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:21.198 23:30:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:21.198 23:30:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:21.198 23:30:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:21.198 23:30:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:27:21.198 23:30:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:21.198 23:30:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:21.198 23:30:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:21.198 23:30:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:21.198 23:30:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:21.198 23:30:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:21.198 EAL: No free 2048 kB hugepages 
reported on node 1 00:27:23.721 Initializing NVMe Controllers 00:27:23.721 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:23.721 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:23.721 Initialization complete. Launching workers. 00:27:23.721 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11452, failed: 0 00:27:23.721 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1299, failed to submit 10153 00:27:23.721 success 749, unsuccess 550, failed 0 00:27:23.978 23:30:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:23.978 23:30:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:23.978 EAL: No free 2048 kB hugepages reported on node 1 00:27:27.259 Initializing NVMe Controllers 00:27:27.259 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:27.259 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:27.259 Initialization complete. Launching workers. 00:27:27.259 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8516, failed: 0 00:27:27.259 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1230, failed to submit 7286 00:27:27.259 success 332, unsuccess 898, failed 0 00:27:27.259 23:30:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:27.259 23:30:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:27.259 EAL: No free 2048 kB hugepages reported on node 1 00:27:30.535 Initializing NVMe Controllers 00:27:30.535 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:30.535 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:30.535 Initialization complete. Launching workers. 
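The runs above (the third completes just below) come from rabort(), which builds the transport ID string field by field and then sweeps the abort example over queue depths 4, 24 and 64; each summary reports the I/Os completed on the namespace, the abort commands submitted against in-flight I/Os versus those not submitted, and the success/unsuccess split of the submitted aborts. A sketch of the sweep, reusing the binary and transport ID exactly as they appear in this log:

# sketch only -- mirrors the qds=(4 24 64) loop in abort_qd_sizes.sh
trid='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
for qd in 4 24 64; do
    ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$trid"
done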
00:27:30.535 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31515, failed: 0 00:27:30.535 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2819, failed to submit 28696 00:27:30.535 success 525, unsuccess 2294, failed 0 00:27:30.535 23:30:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:27:30.535 23:30:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.535 23:30:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:30.535 23:30:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.535 23:30:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:27:30.535 23:30:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.535 23:30:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:31.906 23:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.906 23:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2467477 00:27:31.906 23:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 2467477 ']' 00:27:31.906 23:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 2467477 00:27:31.906 23:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:27:31.906 23:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:31.906 23:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2467477 00:27:31.906 23:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:31.906 23:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:31.906 23:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2467477' 00:27:31.906 killing process with pid 2467477 00:27:31.906 23:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 2467477 00:27:31.906 23:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 2467477 00:27:32.163 00:27:32.163 real 0m14.199s 00:27:32.163 user 0m53.562s 00:27:32.163 sys 0m2.854s 00:27:32.163 23:30:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:32.163 23:30:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:32.163 ************************************ 00:27:32.163 END TEST spdk_target_abort 00:27:32.163 ************************************ 00:27:32.163 23:30:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:27:32.163 23:30:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:27:32.163 23:30:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:32.163 23:30:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:32.163 23:30:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:32.163 
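Everything in spdk_target_abort is driven through rpc_cmd, i.e. JSON-RPC calls into the nvmf_tgt started earlier: attach the local NVMe device as a bdev, create the TCP transport, create a subsystem, expose the bdev as its namespace, add the listener, and after the abort runs delete the subsystem and detach the controller. A sketch of the same sequence issued directly with scripts/rpc.py (default RPC socket assumed; arguments copied from the log):

# sketch only -- the RPC sequence traced above, setup then teardown
RPC=./scripts/rpc.py
$RPC bdev_nvme_attach_controller -t pcie -a 0000:82:00.0 -b spdk_target
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
# ... abort example runs against 10.0.0.2:4420 happen here ...
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn
$RPC bdev_nvme_detach_controller spdk_target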
************************************ 00:27:32.163 START TEST kernel_target_abort 00:27:32.163 ************************************ 00:27:32.163 23:30:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:27:32.163 23:30:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:27:32.163 23:30:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:27:32.163 23:30:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.163 23:30:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.163 23:30:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.163 23:30:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.163 23:30:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.163 23:30:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.163 23:30:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.163 23:30:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.163 23:30:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.163 23:30:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:32.163 23:30:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:32.163 23:30:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:32.163 23:30:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:32.163 23:30:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:32.163 23:30:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:32.163 23:30:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:27:32.163 23:30:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:32.163 23:30:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:32.163 23:30:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:32.163 23:30:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:33.096 Waiting for block devices as requested 00:27:33.096 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:27:33.354 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:33.354 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:33.612 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:33.612 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:33.612 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:33.612 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:33.870 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:33.870 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:33.870 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:33.870 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:34.127 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:34.127 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:34.127 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:34.127 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:34.127 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:34.385 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:34.385 23:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:34.385 23:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:34.385 23:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:34.385 23:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:34.385 23:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:34.385 23:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:34.385 23:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:34.385 23:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:34.386 23:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:34.386 No valid GPT data, bailing 00:27:34.386 23:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:34.386 23:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:27:34.386 23:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:27:34.386 23:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:34.386 23:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:34.386 23:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:34.386 23:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:34.386 23:30:49 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:34.386 23:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:34.386 23:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:27:34.386 23:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:34.386 23:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:27:34.386 23:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:34.386 23:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:27:34.386 23:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:27:34.386 23:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:27:34.386 23:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:34.386 23:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:27:34.644 00:27:34.644 Discovery Log Number of Records 2, Generation counter 2 00:27:34.644 =====Discovery Log Entry 0====== 00:27:34.644 trtype: tcp 00:27:34.644 adrfam: ipv4 00:27:34.644 subtype: current discovery subsystem 00:27:34.644 treq: not specified, sq flow control disable supported 00:27:34.644 portid: 1 00:27:34.644 trsvcid: 4420 00:27:34.644 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:34.644 traddr: 10.0.0.1 00:27:34.644 eflags: none 00:27:34.644 sectype: none 00:27:34.644 =====Discovery Log Entry 1====== 00:27:34.644 trtype: tcp 00:27:34.644 adrfam: ipv4 00:27:34.644 subtype: nvme subsystem 00:27:34.644 treq: not specified, sq flow control disable supported 00:27:34.644 portid: 1 00:27:34.644 trsvcid: 4420 00:27:34.645 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:34.645 traddr: 10.0.0.1 00:27:34.645 eflags: none 00:27:34.645 sectype: none 00:27:34.645 23:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:27:34.645 23:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:34.645 23:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:34.645 23:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:27:34.645 23:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:34.645 23:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:27:34.645 23:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:34.645 23:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:34.645 23:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:34.645 23:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:34.645 23:30:49 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:34.645 23:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:34.645 23:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:34.645 23:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:34.645 23:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:27:34.645 23:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:34.645 23:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:27:34.645 23:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:34.645 23:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:34.645 23:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:34.645 23:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:34.645 EAL: No free 2048 kB hugepages reported on node 1 00:27:37.983 Initializing NVMe Controllers 00:27:37.983 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:37.983 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:37.983 Initialization complete. Launching workers. 00:27:37.983 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37649, failed: 0 00:27:37.983 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 37649, failed to submit 0 00:27:37.983 success 0, unsuccess 37649, failed 0 00:27:37.983 23:30:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:37.983 23:30:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:37.983 EAL: No free 2048 kB hugepages reported on node 1 00:27:41.257 Initializing NVMe Controllers 00:27:41.257 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:41.257 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:41.257 Initialization complete. Launching workers. 
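kernel_target_abort repeats the exercise against the in-kernel nvmet target instead of SPDK: the mkdir/echo/ln -s calls traced earlier build the subsystem, namespace and port through configfs, backed by the local /dev/nvme0n1, and the discovery log above shows the resulting listener on 10.0.0.1:4420. The log records only the values being echoed, not the destination files, so the attribute names in this sketch are the standard nvmet configfs ones rather than something read from this run:

# sketch only -- the configfs layout configure_kernel_target appears to populate
SUBSYS=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
PORT=/sys/kernel/config/nvmet/ports/1
modprobe nvmet                                  # as in the log; nvmet_tcp is assumed to load when the port binds
mkdir "$SUBSYS"
mkdir "$SUBSYS/namespaces/1"
mkdir "$PORT"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$SUBSYS/attr_model"
echo 1 > "$SUBSYS/attr_allow_any_host"
echo /dev/nvme0n1 > "$SUBSYS/namespaces/1/device_path"
echo 1 > "$SUBSYS/namespaces/1/enable"
echo 10.0.0.1 > "$PORT/addr_traddr"
echo tcp > "$PORT/addr_trtype"
echo 4420 > "$PORT/addr_trsvcid"
echo ipv4 > "$PORT/addr_adrfam"
ln -s "$SUBSYS" "$PORT/subsystems/"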
00:27:41.257 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 75692, failed: 0 00:27:41.257 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19082, failed to submit 56610 00:27:41.257 success 0, unsuccess 19082, failed 0 00:27:41.257 23:30:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:41.257 23:30:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:41.257 EAL: No free 2048 kB hugepages reported on node 1 00:27:44.530 Initializing NVMe Controllers 00:27:44.530 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:44.530 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:44.530 Initialization complete. Launching workers. 00:27:44.530 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 69719, failed: 0 00:27:44.530 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17430, failed to submit 52289 00:27:44.530 success 0, unsuccess 17430, failed 0 00:27:44.530 23:30:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:27:44.530 23:30:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:44.530 23:30:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:27:44.530 23:30:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:44.530 23:30:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:44.530 23:30:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:44.530 23:30:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:44.530 23:30:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:44.530 23:30:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:44.530 23:30:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:45.094 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:45.094 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:45.094 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:45.094 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:45.094 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:45.094 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:45.094 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:45.094 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:45.094 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:45.094 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:45.094 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:45.094 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:45.094 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:45.094 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:27:45.094 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:45.094 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:46.029 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:27:46.288 00:27:46.288 real 0m14.150s 00:27:46.288 user 0m5.830s 00:27:46.288 sys 0m3.235s 00:27:46.288 23:31:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:46.288 23:31:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:46.288 ************************************ 00:27:46.288 END TEST kernel_target_abort 00:27:46.288 ************************************ 00:27:46.288 23:31:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:27:46.288 23:31:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:46.288 23:31:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:27:46.288 23:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:46.288 23:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:27:46.288 23:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:46.288 23:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:27:46.288 23:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:46.288 23:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:46.288 rmmod nvme_tcp 00:27:46.288 rmmod nvme_fabrics 00:27:46.288 rmmod nvme_keyring 00:27:46.288 23:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:46.288 23:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:27:46.288 23:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:27:46.288 23:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 2467477 ']' 00:27:46.288 23:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 2467477 00:27:46.288 23:31:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 2467477 ']' 00:27:46.288 23:31:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 2467477 00:27:46.288 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2467477) - No such process 00:27:46.288 23:31:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 2467477 is not found' 00:27:46.288 Process with pid 2467477 is not found 00:27:46.288 23:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:27:46.288 23:31:01 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:47.222 Waiting for block devices as requested 00:27:47.222 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:27:47.481 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:47.481 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:47.739 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:47.739 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:47.739 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:47.739 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:47.997 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:47.997 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:47.997 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:47.997 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:48.254 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:48.254 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:48.254 0000:80:04.3 (8086 0e23): vfio-pci -> 
ioatdma 00:27:48.511 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:48.511 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:48.511 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:48.511 23:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:48.511 23:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:48.511 23:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:48.511 23:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:48.512 23:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:48.512 23:31:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:48.512 23:31:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:51.039 23:31:05 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:51.039 00:27:51.039 real 0m37.697s 00:27:51.039 user 1m1.480s 00:27:51.039 sys 0m9.356s 00:27:51.039 23:31:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:51.039 23:31:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:51.039 ************************************ 00:27:51.039 END TEST nvmf_abort_qd_sizes 00:27:51.039 ************************************ 00:27:51.039 23:31:05 -- common/autotest_common.sh@1142 -- # return 0 00:27:51.039 23:31:05 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:27:51.039 23:31:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:51.039 23:31:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:51.039 23:31:05 -- common/autotest_common.sh@10 -- # set +x 00:27:51.039 ************************************ 00:27:51.039 START TEST keyring_file 00:27:51.039 ************************************ 00:27:51.039 23:31:05 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:27:51.039 * Looking for test storage... 
00:27:51.039 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:27:51.039 23:31:05 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:27:51.039 23:31:05 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:51.039 23:31:05 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:27:51.039 23:31:05 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:51.039 23:31:05 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:51.039 23:31:05 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:51.039 23:31:05 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:51.039 23:31:05 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:51.039 23:31:05 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:51.039 23:31:05 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:51.039 23:31:05 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:51.039 23:31:05 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:51.039 23:31:05 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:51.039 23:31:05 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:51.039 23:31:05 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:27:51.039 23:31:05 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:51.040 23:31:05 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:51.040 23:31:05 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:51.040 23:31:05 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:51.040 23:31:05 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:51.040 23:31:05 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:51.040 23:31:05 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:51.040 23:31:05 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:51.040 23:31:05 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.040 23:31:05 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.040 23:31:05 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.040 23:31:05 keyring_file -- paths/export.sh@5 -- # export PATH 00:27:51.040 23:31:05 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.040 23:31:05 keyring_file -- nvmf/common.sh@47 -- # : 0 00:27:51.040 23:31:05 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:51.040 23:31:05 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:51.040 23:31:05 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:51.040 23:31:05 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:51.040 23:31:05 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:51.040 23:31:05 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:51.040 23:31:05 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:51.040 23:31:05 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:51.040 23:31:05 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:51.040 23:31:05 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:51.040 23:31:05 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:51.040 23:31:05 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:27:51.040 23:31:05 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:27:51.040 23:31:05 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:27:51.040 23:31:05 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:51.040 23:31:05 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:51.040 23:31:05 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:51.040 23:31:05 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:51.040 23:31:05 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:51.040 23:31:05 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:51.040 23:31:05 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Tkq0cBDxZk 00:27:51.040 23:31:05 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:51.040 23:31:05 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:51.040 23:31:05 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:51.040 23:31:05 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:51.040 23:31:05 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:51.040 23:31:05 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:51.040 23:31:05 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:51.040 23:31:06 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Tkq0cBDxZk 00:27:51.040 23:31:06 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Tkq0cBDxZk 00:27:51.040 23:31:06 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.Tkq0cBDxZk 00:27:51.040 23:31:06 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:27:51.040 23:31:06 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:51.040 23:31:06 keyring_file -- keyring/common.sh@17 -- # name=key1 00:27:51.040 23:31:06 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:51.040 23:31:06 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:51.040 23:31:06 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:51.040 23:31:06 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.jbKimk0w5k 00:27:51.040 23:31:06 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:27:51.040 23:31:06 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:27:51.040 23:31:06 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:51.040 23:31:06 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:51.040 23:31:06 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:27:51.040 23:31:06 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:51.040 23:31:06 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:51.040 23:31:06 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.jbKimk0w5k 00:27:51.040 23:31:06 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.jbKimk0w5k 00:27:51.040 23:31:06 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.jbKimk0w5k 00:27:51.040 23:31:06 keyring_file -- keyring/file.sh@30 -- # tgtpid=2473252 00:27:51.040 23:31:06 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:27:51.040 23:31:06 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2473252 00:27:51.040 23:31:06 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2473252 ']' 00:27:51.040 23:31:06 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:51.040 23:31:06 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:51.040 23:31:06 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:51.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:51.040 23:31:06 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:51.040 23:31:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:51.040 [2024-07-15 23:31:06.109036] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 
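The prep_key calls above generate the two TLS pre-shared keys the keyring test will use: each key is rendered by an inline python helper into the NVMe/TCP configured-PSK interchange format and written to a mode-0600 temp file, whose path is what later gets registered as key0/key1. A sketch of what that helper plausibly computes, assuming the interchange framing (NVMeTLSkey-1 prefix, hash indicator, base64 of the key bytes followed by their CRC-32) and assuming the key argument is used as raw bytes; neither detail is visible in the log itself:

# sketch only -- hypothetical reconstruction of format_interchange_psk
key=00112233445566778899aabbccddeeff
hmac=00                          # 00 = no hash; assumed mapping of the digest-0 argument above
path=$(mktemp)
python3 - "$key" "$hmac" > "$path" <<'EOF'
import base64, struct, sys, zlib
k = sys.argv[1].encode()                       # assumption: key string taken as raw bytes
crc = struct.pack('<I', zlib.crc32(k))         # 4-byte little-endian CRC-32 appended to the PSK
print('NVMeTLSkey-1:%s:%s:' % (sys.argv[2], base64.b64encode(k + crc).decode()))
EOF
chmod 0600 "$path"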
00:27:51.040 [2024-07-15 23:31:06.109138] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2473252 ] 00:27:51.040 EAL: No free 2048 kB hugepages reported on node 1 00:27:51.040 [2024-07-15 23:31:06.167827] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.040 [2024-07-15 23:31:06.274202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.298 23:31:06 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:51.298 23:31:06 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:27:51.298 23:31:06 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:27:51.298 23:31:06 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.298 23:31:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:51.298 [2024-07-15 23:31:06.521342] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:51.298 null0 00:27:51.298 [2024-07-15 23:31:06.553392] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:51.298 [2024-07-15 23:31:06.553903] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:51.298 [2024-07-15 23:31:06.561392] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:51.298 23:31:06 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.298 23:31:06 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:51.298 23:31:06 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:51.298 23:31:06 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:51.298 23:31:06 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:51.298 23:31:06 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:51.298 23:31:06 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:51.298 23:31:06 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:51.298 23:31:06 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:51.298 23:31:06 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.298 23:31:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:51.298 [2024-07-15 23:31:06.569404] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:27:51.298 request: 00:27:51.298 { 00:27:51.298 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:27:51.298 "secure_channel": false, 00:27:51.298 "listen_address": { 00:27:51.298 "trtype": "tcp", 00:27:51.298 "traddr": "127.0.0.1", 00:27:51.298 "trsvcid": "4420" 00:27:51.298 }, 00:27:51.298 "method": "nvmf_subsystem_add_listener", 00:27:51.298 "req_id": 1 00:27:51.298 } 00:27:51.298 Got JSON-RPC error response 00:27:51.298 response: 00:27:51.298 { 00:27:51.298 "code": -32602, 00:27:51.298 "message": "Invalid parameters" 00:27:51.298 } 00:27:51.298 23:31:06 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:51.298 23:31:06 keyring_file -- common/autotest_common.sh@651 -- # es=1 
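With the target listening on 127.0.0.1:4420 with TLS enabled (and the duplicate-listener attempt above correctly rejected), the rest of the test drives a separate bdevperf process over its own RPC socket: both key files are registered with keyring_file_add_key, a controller is attached with --psk key0, and a later attach with key1 instead is expected to fail. A condensed sketch of that initiator-side sequence, with socket and arguments copied from the log:

# sketch only -- the bperf-side RPCs that the following entries trace
RPC='./scripts/rpc.py -s /var/tmp/bperf.sock'
$RPC keyring_file_add_key key0 /tmp/tmp.Tkq0cBDxZk
$RPC keyring_file_add_key key1 /tmp/tmp.jbKimk0w5k
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
# attaching with the mismatched key1 later in the test returns Input/output error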
00:27:51.298 23:31:06 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:51.298 23:31:06 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:51.298 23:31:06 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:51.298 23:31:06 keyring_file -- keyring/file.sh@46 -- # bperfpid=2473263 00:27:51.298 23:31:06 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:27:51.298 23:31:06 keyring_file -- keyring/file.sh@48 -- # waitforlisten 2473263 /var/tmp/bperf.sock 00:27:51.298 23:31:06 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2473263 ']' 00:27:51.298 23:31:06 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:51.298 23:31:06 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:51.298 23:31:06 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:51.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:51.298 23:31:06 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:51.298 23:31:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:51.557 [2024-07-15 23:31:06.621297] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:27:51.557 [2024-07-15 23:31:06.621374] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2473263 ] 00:27:51.557 EAL: No free 2048 kB hugepages reported on node 1 00:27:51.557 [2024-07-15 23:31:06.687783] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.557 [2024-07-15 23:31:06.806048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:52.490 23:31:07 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:52.490 23:31:07 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:27:52.490 23:31:07 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Tkq0cBDxZk 00:27:52.490 23:31:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Tkq0cBDxZk 00:27:52.490 23:31:07 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.jbKimk0w5k 00:27:52.490 23:31:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.jbKimk0w5k 00:27:52.748 23:31:08 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:27:52.748 23:31:08 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:27:52.748 23:31:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:52.748 23:31:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:52.748 23:31:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:53.006 23:31:08 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.Tkq0cBDxZk == \/\t\m\p\/\t\m\p\.\T\k\q\0\c\B\D\x\Z\k ]] 00:27:53.006 23:31:08 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:27:53.006 23:31:08 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:27:53.006 23:31:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:53.006 23:31:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:53.006 23:31:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:53.264 23:31:08 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.jbKimk0w5k == \/\t\m\p\/\t\m\p\.\j\b\K\i\m\k\0\w\5\k ]] 00:27:53.264 23:31:08 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:27:53.264 23:31:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:53.264 23:31:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:53.264 23:31:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:53.264 23:31:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:53.264 23:31:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:53.522 23:31:08 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:27:53.522 23:31:08 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:27:53.522 23:31:08 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:53.522 23:31:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:53.523 23:31:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:53.523 23:31:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:53.523 23:31:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:53.781 23:31:09 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:27:53.781 23:31:09 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:53.781 23:31:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:54.039 [2024-07-15 23:31:09.286306] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:54.039 nvme0n1 00:27:54.296 23:31:09 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:27:54.296 23:31:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:54.296 23:31:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:54.296 23:31:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:54.296 23:31:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:54.296 23:31:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:54.554 23:31:09 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:27:54.554 23:31:09 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:27:54.554 23:31:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:54.554 23:31:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:54.554 23:31:09 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:54.554 23:31:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:54.554 23:31:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:54.813 23:31:09 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:27:54.813 23:31:09 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:54.813 Running I/O for 1 seconds... 00:27:55.747 00:27:55.747 Latency(us) 00:27:55.747 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:55.747 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:27:55.747 nvme0n1 : 1.01 6891.38 26.92 0.00 0.00 18456.70 9466.31 30874.74 00:27:55.747 =================================================================================================================== 00:27:55.747 Total : 6891.38 26.92 0.00 0.00 18456.70 9466.31 30874.74 00:27:55.747 0 00:27:55.747 23:31:10 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:55.747 23:31:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:56.005 23:31:11 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:27:56.005 23:31:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:56.005 23:31:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:56.005 23:31:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:56.005 23:31:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:56.005 23:31:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:56.263 23:31:11 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:27:56.263 23:31:11 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:27:56.263 23:31:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:56.263 23:31:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:56.263 23:31:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:56.263 23:31:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:56.263 23:31:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:56.521 23:31:11 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:27:56.521 23:31:11 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:56.521 23:31:11 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:56.521 23:31:11 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:56.521 23:31:11 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:56.521 23:31:11 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:56.521 23:31:11 keyring_file -- 
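With the controller attached, the I/O itself is driven through bdevperf's companion script rather than a new process: perform_tests runs the workload bdevperf was configured with on its command line, and detaching the controller afterwards releases key0 so its refcount drops back to 1. The same step in isolation, assuming the workspace paths used above:

bperf_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

# Run the preconfigured workload (-q 128 -o 4k -w randrw -M 50 -t 1) over the TLS-protected connection.
"$bperf_py" -s "$sock" perform_tests

# Tear the connection down again; the keys stay registered but are no longer in use.
"$rpc" -s "$sock" bdev_nvme_detach_controller nvme0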
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:56.521 23:31:11 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:56.521 23:31:11 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:56.521 23:31:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:56.779 [2024-07-15 23:31:11.969616] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:56.779 [2024-07-15 23:31:11.970168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c8100 (107): Transport endpoint is not connected 00:27:56.779 [2024-07-15 23:31:11.971160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c8100 (9): Bad file descriptor 00:27:56.779 [2024-07-15 23:31:11.972158] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:56.779 [2024-07-15 23:31:11.972183] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:27:56.779 [2024-07-15 23:31:11.972199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:56.779 request: 00:27:56.779 { 00:27:56.779 "name": "nvme0", 00:27:56.779 "trtype": "tcp", 00:27:56.779 "traddr": "127.0.0.1", 00:27:56.779 "adrfam": "ipv4", 00:27:56.779 "trsvcid": "4420", 00:27:56.779 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:56.779 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:56.779 "prchk_reftag": false, 00:27:56.779 "prchk_guard": false, 00:27:56.779 "hdgst": false, 00:27:56.779 "ddgst": false, 00:27:56.779 "psk": "key1", 00:27:56.779 "method": "bdev_nvme_attach_controller", 00:27:56.779 "req_id": 1 00:27:56.779 } 00:27:56.779 Got JSON-RPC error response 00:27:56.779 response: 00:27:56.779 { 00:27:56.779 "code": -5, 00:27:56.779 "message": "Input/output error" 00:27:56.779 } 00:27:56.779 23:31:11 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:56.779 23:31:11 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:56.779 23:31:11 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:56.779 23:31:11 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:56.779 23:31:11 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:27:56.779 23:31:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:56.779 23:31:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:56.779 23:31:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:56.779 23:31:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:56.779 23:31:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:57.037 23:31:12 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:27:57.037 23:31:12 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:27:57.037 23:31:12 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:57.037 23:31:12 
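The block above is a negative test: key1 does not match the PSK the target expects, so the attach has to fail, and the NOT wrapper turns that expected failure into a pass. A simplified stand-in for the autotest_common.sh helper (the real one also inspects the exit-status ranges seen in the trace):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

NOT() {
    # Succeed only when the wrapped command fails.
    if "$@"; then
        return 1
    fi
    return 0
}

# Attaching with the wrong PSK must be rejected; the log shows the resulting
# JSON-RPC error response (code -5, "Input/output error").
NOT "$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1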
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:57.037 23:31:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:57.037 23:31:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:57.037 23:31:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:57.295 23:31:12 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:27:57.295 23:31:12 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:27:57.295 23:31:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:57.552 23:31:12 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:27:57.552 23:31:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:27:57.810 23:31:13 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:27:57.810 23:31:13 keyring_file -- keyring/file.sh@77 -- # jq length 00:27:57.810 23:31:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:58.068 23:31:13 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:27:58.068 23:31:13 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.Tkq0cBDxZk 00:27:58.068 23:31:13 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.Tkq0cBDxZk 00:27:58.068 23:31:13 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:58.068 23:31:13 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.Tkq0cBDxZk 00:27:58.068 23:31:13 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:58.068 23:31:13 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:58.068 23:31:13 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:58.068 23:31:13 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:58.068 23:31:13 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Tkq0cBDxZk 00:27:58.068 23:31:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Tkq0cBDxZk 00:27:58.326 [2024-07-15 23:31:13.473610] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Tkq0cBDxZk': 0100660 00:27:58.326 [2024-07-15 23:31:13.473650] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:27:58.326 request: 00:27:58.326 { 00:27:58.326 "name": "key0", 00:27:58.326 "path": "/tmp/tmp.Tkq0cBDxZk", 00:27:58.326 "method": "keyring_file_add_key", 00:27:58.326 "req_id": 1 00:27:58.326 } 00:27:58.326 Got JSON-RPC error response 00:27:58.326 response: 00:27:58.326 { 00:27:58.326 "code": -1, 00:27:58.326 "message": "Operation not permitted" 00:27:58.327 } 00:27:58.327 23:31:13 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:58.327 23:31:13 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:58.327 23:31:13 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:58.327 23:31:13 keyring_file -- 
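The chmod 0660 step is a permission test: keyring_file refuses to load a key file that is readable or writable by group or others, which is why the add above is rejected with "Operation not permitted"; restoring 0600, as the log continues below, makes the same call succeed. The shape of that check:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

chmod 0660 /tmp/tmp.Tkq0cBDxZk
if "$rpc" -s "$sock" keyring_file_add_key key0 /tmp/tmp.Tkq0cBDxZk; then
    echo "unexpected: group-accessible key file was accepted"
fi

chmod 0600 /tmp/tmp.Tkq0cBDxZk   # owner-only permissions are accepted
"$rpc" -s "$sock" keyring_file_add_key key0 /tmp/tmp.Tkq0cBDxZk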
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:58.327 23:31:13 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.Tkq0cBDxZk 00:27:58.327 23:31:13 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Tkq0cBDxZk 00:27:58.327 23:31:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Tkq0cBDxZk 00:27:58.583 23:31:13 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.Tkq0cBDxZk 00:27:58.583 23:31:13 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:27:58.583 23:31:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:58.583 23:31:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:58.583 23:31:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:58.583 23:31:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:58.583 23:31:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:58.840 23:31:13 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:27:58.840 23:31:13 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:58.840 23:31:13 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:58.840 23:31:13 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:58.840 23:31:13 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:58.840 23:31:13 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:58.840 23:31:14 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:58.840 23:31:13 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:58.840 23:31:14 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:58.840 23:31:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:59.097 [2024-07-15 23:31:14.231696] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.Tkq0cBDxZk': No such file or directory 00:27:59.097 [2024-07-15 23:31:14.231734] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:27:59.097 [2024-07-15 23:31:14.231796] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:27:59.097 [2024-07-15 23:31:14.231809] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:59.097 [2024-07-15 23:31:14.231820] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:27:59.097 request: 00:27:59.097 { 00:27:59.097 "name": "nvme0", 00:27:59.097 "trtype": "tcp", 00:27:59.097 "traddr": "127.0.0.1", 00:27:59.097 "adrfam": "ipv4", 00:27:59.097 
"trsvcid": "4420", 00:27:59.097 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:59.097 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:59.097 "prchk_reftag": false, 00:27:59.097 "prchk_guard": false, 00:27:59.097 "hdgst": false, 00:27:59.097 "ddgst": false, 00:27:59.097 "psk": "key0", 00:27:59.097 "method": "bdev_nvme_attach_controller", 00:27:59.097 "req_id": 1 00:27:59.097 } 00:27:59.097 Got JSON-RPC error response 00:27:59.097 response: 00:27:59.097 { 00:27:59.097 "code": -19, 00:27:59.097 "message": "No such device" 00:27:59.097 } 00:27:59.097 23:31:14 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:59.097 23:31:14 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:59.097 23:31:14 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:59.097 23:31:14 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:59.097 23:31:14 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:27:59.097 23:31:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:59.354 23:31:14 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:59.354 23:31:14 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:59.354 23:31:14 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:59.354 23:31:14 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:59.354 23:31:14 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:59.354 23:31:14 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:59.355 23:31:14 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.N8CmK5WKep 00:27:59.355 23:31:14 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:59.355 23:31:14 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:59.355 23:31:14 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:59.355 23:31:14 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:59.355 23:31:14 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:59.355 23:31:14 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:59.355 23:31:14 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:59.355 23:31:14 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.N8CmK5WKep 00:27:59.355 23:31:14 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.N8CmK5WKep 00:27:59.355 23:31:14 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.N8CmK5WKep 00:27:59.355 23:31:14 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.N8CmK5WKep 00:27:59.355 23:31:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.N8CmK5WKep 00:27:59.612 23:31:14 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:59.612 23:31:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:59.870 nvme0n1 00:27:59.870 
23:31:15 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:27:59.870 23:31:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:59.870 23:31:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:59.870 23:31:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:59.870 23:31:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:59.870 23:31:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:00.128 23:31:15 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:28:00.128 23:31:15 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:28:00.128 23:31:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:00.408 23:31:15 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:28:00.408 23:31:15 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:28:00.408 23:31:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:00.408 23:31:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:00.408 23:31:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:00.666 23:31:15 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:28:00.666 23:31:15 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:28:00.666 23:31:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:00.666 23:31:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:00.666 23:31:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:00.666 23:31:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:00.666 23:31:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:00.924 23:31:16 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:28:00.924 23:31:16 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:00.924 23:31:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:01.182 23:31:16 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:28:01.182 23:31:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:01.182 23:31:16 keyring_file -- keyring/file.sh@104 -- # jq length 00:28:01.440 23:31:16 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:28:01.440 23:31:16 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.N8CmK5WKep 00:28:01.441 23:31:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.N8CmK5WKep 00:28:01.699 23:31:16 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.jbKimk0w5k 00:28:01.699 23:31:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.jbKimk0w5k 00:28:01.955 23:31:17 
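The remove-while-attached sequence demonstrates lazy deletion: removing key0 while nvme0 still holds it only flags the key as removed and drops its refcount from 2 to 1; the keyring is empty only after the controller is detached. In RPC terms:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

"$rpc" -s "$sock" keyring_file_remove_key key0
"$rpc" -s "$sock" keyring_get_keys | jq '.[] | select(.name == "key0") | .removed'   # true
"$rpc" -s "$sock" bdev_nvme_detach_controller nvme0
"$rpc" -s "$sock" keyring_get_keys | jq length                                        # 0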
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:01.955 23:31:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:02.212 nvme0n1 00:28:02.212 23:31:17 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:28:02.212 23:31:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:28:02.470 23:31:17 keyring_file -- keyring/file.sh@112 -- # config='{ 00:28:02.470 "subsystems": [ 00:28:02.470 { 00:28:02.470 "subsystem": "keyring", 00:28:02.470 "config": [ 00:28:02.470 { 00:28:02.470 "method": "keyring_file_add_key", 00:28:02.470 "params": { 00:28:02.470 "name": "key0", 00:28:02.470 "path": "/tmp/tmp.N8CmK5WKep" 00:28:02.470 } 00:28:02.470 }, 00:28:02.470 { 00:28:02.470 "method": "keyring_file_add_key", 00:28:02.471 "params": { 00:28:02.471 "name": "key1", 00:28:02.471 "path": "/tmp/tmp.jbKimk0w5k" 00:28:02.471 } 00:28:02.471 } 00:28:02.471 ] 00:28:02.471 }, 00:28:02.471 { 00:28:02.471 "subsystem": "iobuf", 00:28:02.471 "config": [ 00:28:02.471 { 00:28:02.471 "method": "iobuf_set_options", 00:28:02.471 "params": { 00:28:02.471 "small_pool_count": 8192, 00:28:02.471 "large_pool_count": 1024, 00:28:02.471 "small_bufsize": 8192, 00:28:02.471 "large_bufsize": 135168 00:28:02.471 } 00:28:02.471 } 00:28:02.471 ] 00:28:02.471 }, 00:28:02.471 { 00:28:02.471 "subsystem": "sock", 00:28:02.471 "config": [ 00:28:02.471 { 00:28:02.471 "method": "sock_set_default_impl", 00:28:02.471 "params": { 00:28:02.471 "impl_name": "posix" 00:28:02.471 } 00:28:02.471 }, 00:28:02.471 { 00:28:02.471 "method": "sock_impl_set_options", 00:28:02.471 "params": { 00:28:02.471 "impl_name": "ssl", 00:28:02.471 "recv_buf_size": 4096, 00:28:02.471 "send_buf_size": 4096, 00:28:02.471 "enable_recv_pipe": true, 00:28:02.471 "enable_quickack": false, 00:28:02.471 "enable_placement_id": 0, 00:28:02.471 "enable_zerocopy_send_server": true, 00:28:02.471 "enable_zerocopy_send_client": false, 00:28:02.471 "zerocopy_threshold": 0, 00:28:02.471 "tls_version": 0, 00:28:02.471 "enable_ktls": false 00:28:02.471 } 00:28:02.471 }, 00:28:02.471 { 00:28:02.471 "method": "sock_impl_set_options", 00:28:02.471 "params": { 00:28:02.471 "impl_name": "posix", 00:28:02.471 "recv_buf_size": 2097152, 00:28:02.471 "send_buf_size": 2097152, 00:28:02.471 "enable_recv_pipe": true, 00:28:02.471 "enable_quickack": false, 00:28:02.471 "enable_placement_id": 0, 00:28:02.471 "enable_zerocopy_send_server": true, 00:28:02.471 "enable_zerocopy_send_client": false, 00:28:02.471 "zerocopy_threshold": 0, 00:28:02.471 "tls_version": 0, 00:28:02.471 "enable_ktls": false 00:28:02.471 } 00:28:02.471 } 00:28:02.471 ] 00:28:02.471 }, 00:28:02.471 { 00:28:02.471 "subsystem": "vmd", 00:28:02.471 "config": [] 00:28:02.471 }, 00:28:02.471 { 00:28:02.471 "subsystem": "accel", 00:28:02.471 "config": [ 00:28:02.471 { 00:28:02.471 "method": "accel_set_options", 00:28:02.471 "params": { 00:28:02.471 "small_cache_size": 128, 00:28:02.471 "large_cache_size": 16, 00:28:02.471 "task_count": 2048, 00:28:02.471 "sequence_count": 2048, 00:28:02.471 "buf_count": 2048 00:28:02.471 } 00:28:02.471 } 00:28:02.471 ] 00:28:02.471 
}, 00:28:02.471 { 00:28:02.471 "subsystem": "bdev", 00:28:02.471 "config": [ 00:28:02.471 { 00:28:02.471 "method": "bdev_set_options", 00:28:02.471 "params": { 00:28:02.471 "bdev_io_pool_size": 65535, 00:28:02.471 "bdev_io_cache_size": 256, 00:28:02.471 "bdev_auto_examine": true, 00:28:02.471 "iobuf_small_cache_size": 128, 00:28:02.471 "iobuf_large_cache_size": 16 00:28:02.471 } 00:28:02.471 }, 00:28:02.471 { 00:28:02.471 "method": "bdev_raid_set_options", 00:28:02.471 "params": { 00:28:02.471 "process_window_size_kb": 1024 00:28:02.471 } 00:28:02.471 }, 00:28:02.471 { 00:28:02.471 "method": "bdev_iscsi_set_options", 00:28:02.471 "params": { 00:28:02.471 "timeout_sec": 30 00:28:02.471 } 00:28:02.471 }, 00:28:02.471 { 00:28:02.471 "method": "bdev_nvme_set_options", 00:28:02.471 "params": { 00:28:02.471 "action_on_timeout": "none", 00:28:02.471 "timeout_us": 0, 00:28:02.471 "timeout_admin_us": 0, 00:28:02.471 "keep_alive_timeout_ms": 10000, 00:28:02.471 "arbitration_burst": 0, 00:28:02.471 "low_priority_weight": 0, 00:28:02.471 "medium_priority_weight": 0, 00:28:02.471 "high_priority_weight": 0, 00:28:02.471 "nvme_adminq_poll_period_us": 10000, 00:28:02.471 "nvme_ioq_poll_period_us": 0, 00:28:02.471 "io_queue_requests": 512, 00:28:02.471 "delay_cmd_submit": true, 00:28:02.471 "transport_retry_count": 4, 00:28:02.471 "bdev_retry_count": 3, 00:28:02.471 "transport_ack_timeout": 0, 00:28:02.471 "ctrlr_loss_timeout_sec": 0, 00:28:02.471 "reconnect_delay_sec": 0, 00:28:02.471 "fast_io_fail_timeout_sec": 0, 00:28:02.471 "disable_auto_failback": false, 00:28:02.471 "generate_uuids": false, 00:28:02.471 "transport_tos": 0, 00:28:02.471 "nvme_error_stat": false, 00:28:02.471 "rdma_srq_size": 0, 00:28:02.471 "io_path_stat": false, 00:28:02.471 "allow_accel_sequence": false, 00:28:02.471 "rdma_max_cq_size": 0, 00:28:02.471 "rdma_cm_event_timeout_ms": 0, 00:28:02.471 "dhchap_digests": [ 00:28:02.471 "sha256", 00:28:02.471 "sha384", 00:28:02.471 "sha512" 00:28:02.471 ], 00:28:02.471 "dhchap_dhgroups": [ 00:28:02.471 "null", 00:28:02.471 "ffdhe2048", 00:28:02.471 "ffdhe3072", 00:28:02.471 "ffdhe4096", 00:28:02.471 "ffdhe6144", 00:28:02.471 "ffdhe8192" 00:28:02.471 ] 00:28:02.471 } 00:28:02.471 }, 00:28:02.471 { 00:28:02.471 "method": "bdev_nvme_attach_controller", 00:28:02.471 "params": { 00:28:02.471 "name": "nvme0", 00:28:02.471 "trtype": "TCP", 00:28:02.471 "adrfam": "IPv4", 00:28:02.471 "traddr": "127.0.0.1", 00:28:02.471 "trsvcid": "4420", 00:28:02.471 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:02.471 "prchk_reftag": false, 00:28:02.471 "prchk_guard": false, 00:28:02.471 "ctrlr_loss_timeout_sec": 0, 00:28:02.471 "reconnect_delay_sec": 0, 00:28:02.471 "fast_io_fail_timeout_sec": 0, 00:28:02.471 "psk": "key0", 00:28:02.471 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:02.471 "hdgst": false, 00:28:02.471 "ddgst": false 00:28:02.471 } 00:28:02.471 }, 00:28:02.471 { 00:28:02.471 "method": "bdev_nvme_set_hotplug", 00:28:02.471 "params": { 00:28:02.471 "period_us": 100000, 00:28:02.471 "enable": false 00:28:02.471 } 00:28:02.471 }, 00:28:02.471 { 00:28:02.471 "method": "bdev_wait_for_examine" 00:28:02.471 } 00:28:02.471 ] 00:28:02.471 }, 00:28:02.471 { 00:28:02.471 "subsystem": "nbd", 00:28:02.471 "config": [] 00:28:02.471 } 00:28:02.471 ] 00:28:02.471 }' 00:28:02.471 23:31:17 keyring_file -- keyring/file.sh@114 -- # killprocess 2473263 00:28:02.471 23:31:17 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2473263 ']' 00:28:02.471 23:31:17 keyring_file -- common/autotest_common.sh@952 -- # kill 
-0 2473263 00:28:02.471 23:31:17 keyring_file -- common/autotest_common.sh@953 -- # uname 00:28:02.471 23:31:17 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:02.471 23:31:17 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2473263 00:28:02.471 23:31:17 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:02.471 23:31:17 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:02.471 23:31:17 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2473263' 00:28:02.471 killing process with pid 2473263 00:28:02.471 23:31:17 keyring_file -- common/autotest_common.sh@967 -- # kill 2473263 00:28:02.471 Received shutdown signal, test time was about 1.000000 seconds 00:28:02.471 00:28:02.471 Latency(us) 00:28:02.471 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:02.471 =================================================================================================================== 00:28:02.471 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:02.471 23:31:17 keyring_file -- common/autotest_common.sh@972 -- # wait 2473263 00:28:02.729 23:31:18 keyring_file -- keyring/file.sh@117 -- # bperfpid=2474730 00:28:02.729 23:31:18 keyring_file -- keyring/file.sh@119 -- # waitforlisten 2474730 /var/tmp/bperf.sock 00:28:02.729 23:31:18 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2474730 ']' 00:28:02.729 23:31:18 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:02.729 23:31:18 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:28:02.729 23:31:18 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:02.729 23:31:18 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:28:02.729 "subsystems": [ 00:28:02.729 { 00:28:02.729 "subsystem": "keyring", 00:28:02.729 "config": [ 00:28:02.729 { 00:28:02.729 "method": "keyring_file_add_key", 00:28:02.729 "params": { 00:28:02.729 "name": "key0", 00:28:02.729 "path": "/tmp/tmp.N8CmK5WKep" 00:28:02.729 } 00:28:02.729 }, 00:28:02.729 { 00:28:02.729 "method": "keyring_file_add_key", 00:28:02.729 "params": { 00:28:02.729 "name": "key1", 00:28:02.729 "path": "/tmp/tmp.jbKimk0w5k" 00:28:02.729 } 00:28:02.729 } 00:28:02.729 ] 00:28:02.729 }, 00:28:02.729 { 00:28:02.729 "subsystem": "iobuf", 00:28:02.729 "config": [ 00:28:02.729 { 00:28:02.729 "method": "iobuf_set_options", 00:28:02.729 "params": { 00:28:02.729 "small_pool_count": 8192, 00:28:02.729 "large_pool_count": 1024, 00:28:02.729 "small_bufsize": 8192, 00:28:02.729 "large_bufsize": 135168 00:28:02.729 } 00:28:02.729 } 00:28:02.729 ] 00:28:02.729 }, 00:28:02.729 { 00:28:02.729 "subsystem": "sock", 00:28:02.729 "config": [ 00:28:02.729 { 00:28:02.729 "method": "sock_set_default_impl", 00:28:02.729 "params": { 00:28:02.729 "impl_name": "posix" 00:28:02.729 } 00:28:02.729 }, 00:28:02.729 { 00:28:02.729 "method": "sock_impl_set_options", 00:28:02.729 "params": { 00:28:02.729 "impl_name": "ssl", 00:28:02.729 "recv_buf_size": 4096, 00:28:02.729 "send_buf_size": 4096, 00:28:02.729 "enable_recv_pipe": true, 00:28:02.729 "enable_quickack": false, 00:28:02.729 "enable_placement_id": 0, 00:28:02.729 "enable_zerocopy_send_server": true, 00:28:02.729 "enable_zerocopy_send_client": false, 00:28:02.729 "zerocopy_threshold": 0, 00:28:02.729 
"tls_version": 0, 00:28:02.729 "enable_ktls": false 00:28:02.729 } 00:28:02.729 }, 00:28:02.729 { 00:28:02.729 "method": "sock_impl_set_options", 00:28:02.729 "params": { 00:28:02.729 "impl_name": "posix", 00:28:02.729 "recv_buf_size": 2097152, 00:28:02.729 "send_buf_size": 2097152, 00:28:02.729 "enable_recv_pipe": true, 00:28:02.729 "enable_quickack": false, 00:28:02.729 "enable_placement_id": 0, 00:28:02.729 "enable_zerocopy_send_server": true, 00:28:02.729 "enable_zerocopy_send_client": false, 00:28:02.729 "zerocopy_threshold": 0, 00:28:02.729 "tls_version": 0, 00:28:02.729 "enable_ktls": false 00:28:02.729 } 00:28:02.729 } 00:28:02.729 ] 00:28:02.729 }, 00:28:02.729 { 00:28:02.729 "subsystem": "vmd", 00:28:02.729 "config": [] 00:28:02.729 }, 00:28:02.729 { 00:28:02.729 "subsystem": "accel", 00:28:02.729 "config": [ 00:28:02.730 { 00:28:02.730 "method": "accel_set_options", 00:28:02.730 "params": { 00:28:02.730 "small_cache_size": 128, 00:28:02.730 "large_cache_size": 16, 00:28:02.730 "task_count": 2048, 00:28:02.730 "sequence_count": 2048, 00:28:02.730 "buf_count": 2048 00:28:02.730 } 00:28:02.730 } 00:28:02.730 ] 00:28:02.730 }, 00:28:02.730 { 00:28:02.730 "subsystem": "bdev", 00:28:02.730 "config": [ 00:28:02.730 { 00:28:02.730 "method": "bdev_set_options", 00:28:02.730 "params": { 00:28:02.730 "bdev_io_pool_size": 65535, 00:28:02.730 "bdev_io_cache_size": 256, 00:28:02.730 "bdev_auto_examine": true, 00:28:02.730 "iobuf_small_cache_size": 128, 00:28:02.730 "iobuf_large_cache_size": 16 00:28:02.730 } 00:28:02.730 }, 00:28:02.730 { 00:28:02.730 "method": "bdev_raid_set_options", 00:28:02.730 "params": { 00:28:02.730 "process_window_size_kb": 1024 00:28:02.730 } 00:28:02.730 }, 00:28:02.730 { 00:28:02.730 "method": "bdev_iscsi_set_options", 00:28:02.730 "params": { 00:28:02.730 "timeout_sec": 30 00:28:02.730 } 00:28:02.730 }, 00:28:02.730 { 00:28:02.730 "method": "bdev_nvme_set_options", 00:28:02.730 "params": { 00:28:02.730 "action_on_timeout": "none", 00:28:02.730 "timeout_us": 0, 00:28:02.730 "timeout_admin_us": 0, 00:28:02.730 "keep_alive_timeout_ms": 10000, 00:28:02.730 "arbitration_burst": 0, 00:28:02.730 "low_priority_weight": 0, 00:28:02.730 "medium_priority_weight": 0, 00:28:02.730 "high_priority_weight": 0, 00:28:02.730 "nvme_adminq_poll_period_us": 10000, 00:28:02.730 "nvme_ioq_poll_period_us": 0, 00:28:02.730 "io_queue_requests": 512, 00:28:02.730 "delay_cmd_submit": true, 00:28:02.730 "transport_retry_count": 4, 00:28:02.730 "bdev_retry_count": 3, 00:28:02.730 "transport_ack_timeout": 0, 00:28:02.730 "ctrlr_loss_timeout_sec": 0, 00:28:02.730 "reconnect_delay_sec": 0, 00:28:02.730 "fast_io_fail_timeout_sec": 0, 00:28:02.730 "disable_auto_failback": false, 00:28:02.730 "generate_uuids": false, 00:28:02.730 "transport_tos": 0, 00:28:02.730 "nvme_error_stat": false, 00:28:02.730 "rdma_srq_size": 0, 00:28:02.730 "io_path_stat": false, 00:28:02.730 "allow_accel_sequence": false, 00:28:02.730 "rdma_max_cq_size": 0, 00:28:02.730 "rdma_cm_event_timeout_ms": 0, 00:28:02.730 "dhchap_digests": [ 00:28:02.730 "sha256", 00:28:02.730 "sha384", 00:28:02.730 "sha512" 00:28:02.730 ], 00:28:02.730 "dhchap_dhgroups": [ 00:28:02.730 "null", 00:28:02.730 "ffdhe2048", 00:28:02.730 "ffdhe3072", 00:28:02.730 "ffdhe4096", 00:28:02.730 "ffdhe6144", 00:28:02.730 "ffdhe8192" 00:28:02.730 ] 00:28:02.730 } 00:28:02.730 }, 00:28:02.730 { 00:28:02.730 "method": "bdev_nvme_attach_controller", 00:28:02.730 "params": { 00:28:02.730 "name": "nvme0", 00:28:02.730 "trtype": "TCP", 00:28:02.730 "adrfam": "IPv4", 
00:28:02.730 "traddr": "127.0.0.1", 00:28:02.730 "trsvcid": "4420", 00:28:02.730 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:02.730 "prchk_reftag": false, 00:28:02.730 "prchk_guard": false, 00:28:02.730 "ctrlr_loss_timeout_sec": 0, 00:28:02.730 "reconnect_delay_sec": 0, 00:28:02.730 "fast_io_fail_timeout_sec": 0, 00:28:02.730 "psk": "key0", 00:28:02.730 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:02.730 "hdgst": false, 00:28:02.730 "ddgst": false 00:28:02.730 } 00:28:02.730 }, 00:28:02.730 { 00:28:02.730 "method": "bdev_nvme_set_hotplug", 00:28:02.730 "params": { 00:28:02.730 "period_us": 100000, 00:28:02.730 "enable": false 00:28:02.730 } 00:28:02.730 }, 00:28:02.730 { 00:28:02.730 "method": "bdev_wait_for_examine" 00:28:02.730 } 00:28:02.730 ] 00:28:02.730 }, 00:28:02.730 { 00:28:02.730 "subsystem": "nbd", 00:28:02.730 "config": [] 00:28:02.730 } 00:28:02.730 ] 00:28:02.730 }' 00:28:02.730 23:31:18 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:02.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:02.730 23:31:18 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:02.730 23:31:18 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:02.988 [2024-07-15 23:31:18.052065] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:28:02.988 [2024-07-15 23:31:18.052159] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2474730 ] 00:28:02.988 EAL: No free 2048 kB hugepages reported on node 1 00:28:02.988 [2024-07-15 23:31:18.109942] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:02.988 [2024-07-15 23:31:18.219211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:03.246 [2024-07-15 23:31:18.403569] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:03.810 23:31:19 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:03.810 23:31:19 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:28:03.810 23:31:19 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:28:03.810 23:31:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:03.810 23:31:19 keyring_file -- keyring/file.sh@120 -- # jq length 00:28:04.066 23:31:19 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:28:04.066 23:31:19 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:28:04.066 23:31:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:04.067 23:31:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:04.067 23:31:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:04.067 23:31:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:04.067 23:31:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:04.324 23:31:19 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:28:04.324 23:31:19 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:28:04.324 23:31:19 keyring_file -- keyring/common.sh@12 -- # 
get_key key1 00:28:04.324 23:31:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:04.324 23:31:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:04.324 23:31:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:04.324 23:31:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:04.580 23:31:19 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:28:04.580 23:31:19 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:28:04.580 23:31:19 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:28:04.580 23:31:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:28:04.838 23:31:20 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:28:04.838 23:31:20 keyring_file -- keyring/file.sh@1 -- # cleanup 00:28:04.838 23:31:20 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.N8CmK5WKep /tmp/tmp.jbKimk0w5k 00:28:04.838 23:31:20 keyring_file -- keyring/file.sh@20 -- # killprocess 2474730 00:28:04.838 23:31:20 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2474730 ']' 00:28:04.838 23:31:20 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2474730 00:28:04.838 23:31:20 keyring_file -- common/autotest_common.sh@953 -- # uname 00:28:04.838 23:31:20 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:04.838 23:31:20 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2474730 00:28:04.838 23:31:20 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:04.838 23:31:20 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:04.838 23:31:20 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2474730' 00:28:04.838 killing process with pid 2474730 00:28:04.838 23:31:20 keyring_file -- common/autotest_common.sh@967 -- # kill 2474730 00:28:04.838 Received shutdown signal, test time was about 1.000000 seconds 00:28:04.838 00:28:04.838 Latency(us) 00:28:04.838 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:04.838 =================================================================================================================== 00:28:04.838 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:04.838 23:31:20 keyring_file -- common/autotest_common.sh@972 -- # wait 2474730 00:28:05.095 23:31:20 keyring_file -- keyring/file.sh@21 -- # killprocess 2473252 00:28:05.095 23:31:20 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2473252 ']' 00:28:05.095 23:31:20 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2473252 00:28:05.095 23:31:20 keyring_file -- common/autotest_common.sh@953 -- # uname 00:28:05.095 23:31:20 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:05.095 23:31:20 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2473252 00:28:05.095 23:31:20 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:05.095 23:31:20 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:05.095 23:31:20 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2473252' 00:28:05.095 killing process with pid 2473252 00:28:05.095 23:31:20 keyring_file -- 
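Cleanup is symmetrical with setup: the temporary key files are deleted and both SPDK processes, the bdevperf instance and the target application started earlier, are taken down through killprocess. A simplified version of what that helper appears to do, judging by the trace:

killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                                      # still alive?
    echo "killing process with pid $pid ($(ps --no-headers -o comm= "$pid"))"
    kill "$pid"
    wait "$pid" 2> /dev/null || true                                # reap it if it is our child
}

rm -f /tmp/tmp.N8CmK5WKep /tmp/tmp.jbKimk0w5k
killprocess 2474730   # the second bdevperf instance
killprocess 2473252   # the target application from earlier in keyring_file.sh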
common/autotest_common.sh@967 -- # kill 2473252 00:28:05.095 [2024-07-15 23:31:20.314650] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:05.095 23:31:20 keyring_file -- common/autotest_common.sh@972 -- # wait 2473252 00:28:05.658 00:28:05.658 real 0m14.849s 00:28:05.658 user 0m36.814s 00:28:05.658 sys 0m3.243s 00:28:05.658 23:31:20 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:05.658 23:31:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:05.658 ************************************ 00:28:05.658 END TEST keyring_file 00:28:05.658 ************************************ 00:28:05.658 23:31:20 -- common/autotest_common.sh@1142 -- # return 0 00:28:05.658 23:31:20 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:28:05.658 23:31:20 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:28:05.658 23:31:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:05.658 23:31:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:05.658 23:31:20 -- common/autotest_common.sh@10 -- # set +x 00:28:05.658 ************************************ 00:28:05.658 START TEST keyring_linux 00:28:05.658 ************************************ 00:28:05.658 23:31:20 keyring_linux -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:28:05.658 * Looking for test storage... 00:28:05.658 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:28:05.658 23:31:20 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:28:05.658 23:31:20 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:05.658 23:31:20 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:28:05.658 23:31:20 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:05.658 23:31:20 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:05.658 23:31:20 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:05.658 23:31:20 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:05.658 23:31:20 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:05.658 23:31:20 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:05.658 23:31:20 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:05.658 23:31:20 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:05.658 23:31:20 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:05.658 23:31:20 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:05.658 23:31:20 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:05.658 23:31:20 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:28:05.658 23:31:20 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:05.658 23:31:20 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:05.658 23:31:20 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:05.658 23:31:20 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:05.658 23:31:20 keyring_linux -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:05.658 23:31:20 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:05.658 23:31:20 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:05.658 23:31:20 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:05.658 23:31:20 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.658 23:31:20 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.658 23:31:20 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.658 23:31:20 keyring_linux -- paths/export.sh@5 -- # export PATH 00:28:05.658 23:31:20 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.658 23:31:20 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:28:05.658 23:31:20 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:05.658 23:31:20 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:05.658 23:31:20 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:05.658 23:31:20 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:05.658 23:31:20 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:05.658 23:31:20 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:05.658 23:31:20 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:05.658 23:31:20 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:05.658 23:31:20 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:28:05.658 23:31:20 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:28:05.658 23:31:20 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:28:05.658 23:31:20 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:28:05.658 23:31:20 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:28:05.658 23:31:20 
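key0 and key1 here are the raw configured PSK strings; before they reach the target, prep_key/format_interchange_psk wraps each one into the NVMe/TCP interchange form NVMeTLSkey-1:<digest>:<base64 blob>: and writes it to an owner-only temp file (/tmp/:spdk-test:key0 and :key1 below), just as keyring_file did earlier. A sketch of that wrapping, assuming the inline python step appends a standard zlib CRC-32 of the key bytes (little-endian) and renders digest 0 as "00":

key=00112233445566778899aabbccddeeff
path=/tmp/:spdk-test:key0

python3 - "$key" > "$path" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")   # assumed CRC-32 placement
print("NVMeTLSkey-1:00:{}:".format(base64.b64encode(key + crc).decode()), end="")
EOF

chmod 0600 "$path"   # the test keeps PSK material owner-only on disk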
keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:28:05.658 23:31:20 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:28:05.658 23:31:20 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:28:05.658 23:31:20 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:28:05.658 23:31:20 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:05.658 23:31:20 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:28:05.658 23:31:20 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:28:05.658 23:31:20 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:05.658 23:31:20 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:05.658 23:31:20 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:28:05.658 23:31:20 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:05.658 23:31:20 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:28:05.658 23:31:20 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:28:05.658 23:31:20 keyring_linux -- nvmf/common.sh@705 -- # python - 00:28:05.658 23:31:20 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:28:05.659 23:31:20 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:28:05.659 /tmp/:spdk-test:key0 00:28:05.659 23:31:20 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:28:05.659 23:31:20 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:28:05.659 23:31:20 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:28:05.659 23:31:20 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:28:05.659 23:31:20 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:28:05.659 23:31:20 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:28:05.659 23:31:20 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:28:05.659 23:31:20 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:28:05.659 23:31:20 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:28:05.659 23:31:20 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:05.659 23:31:20 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:28:05.659 23:31:20 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:28:05.659 23:31:20 keyring_linux -- nvmf/common.sh@705 -- # python - 00:28:05.659 23:31:20 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:28:05.659 23:31:20 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:28:05.659 /tmp/:spdk-test:key1 00:28:05.659 23:31:20 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2475214 00:28:05.659 23:31:20 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:28:05.659 23:31:20 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2475214 00:28:05.659 23:31:20 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 2475214 ']' 00:28:05.659 23:31:20 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:05.659 23:31:20 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 
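keyring_linux then brings up its own target: spdk_tgt is launched in the background and waitforlisten polls the default RPC socket until the application answers. A sketch of that start-and-wait pattern; the polling RPC shown is an assumption, and the real autotest_common.sh helper is more careful about timeouts and pid checks:

spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

"$spdk_tgt" &
tgtpid=$!

# Poll until the RPC server answers on the default socket.
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.5
done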
00:28:05.659 23:31:20 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:05.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:05.659 23:31:20 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:05.659 23:31:20 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:05.915 [2024-07-15 23:31:21.000602] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:28:05.915 [2024-07-15 23:31:21.000684] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2475214 ] 00:28:05.915 EAL: No free 2048 kB hugepages reported on node 1 00:28:05.915 [2024-07-15 23:31:21.063406] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.915 [2024-07-15 23:31:21.180628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:06.847 23:31:21 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:06.847 23:31:21 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:28:06.847 23:31:21 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:28:06.847 23:31:21 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.847 23:31:21 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:06.847 [2024-07-15 23:31:21.933098] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:06.847 null0 00:28:06.847 [2024-07-15 23:31:21.965146] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:06.847 [2024-07-15 23:31:21.965653] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:06.847 23:31:21 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.847 23:31:21 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:28:06.847 798321683 00:28:06.847 23:31:21 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:28:06.847 370435662 00:28:06.847 23:31:21 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2475351 00:28:06.848 23:31:21 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:28:06.848 23:31:21 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2475351 /var/tmp/bperf.sock 00:28:06.848 23:31:21 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 2475351 ']' 00:28:06.848 23:31:21 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:06.848 23:31:21 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:06.848 23:31:21 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:06.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
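This is where keyring_linux departs from keyring_file: the interchange-format strings are loaded into the kernel's session keyring with keyctl, and the serial numbers printed above (798321683 and 370435662) are how the kernel refers to them; SPDK resolves the ":spdk-test:key0" name on its own once keyring_linux_set_options --enable has been issued further down. The keyctl side, taken by itself:

# Load the interchange-format PSKs into the session keyring (@s); keyctl prints the serial number.
keyctl add user ":spdk-test:key0" "$(cat /tmp/:spdk-test:key0)" @s
keyctl add user ":spdk-test:key1" "$(cat /tmp/:spdk-test:key1)" @s

# Name -> serial lookup and payload dump, as the check_keys helper does later.
sn=$(keyctl search @s user ":spdk-test:key0")
keyctl print "$sn"   # NVMeTLSkey-1:00:...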
00:28:06.848 23:31:21 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:06.848 23:31:21 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:06.848 [2024-07-15 23:31:22.031044] Starting SPDK v24.09-pre git sha1 c1860effd / DPDK 24.03.0 initialization... 00:28:06.848 [2024-07-15 23:31:22.031142] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2475351 ] 00:28:06.848 EAL: No free 2048 kB hugepages reported on node 1 00:28:06.848 [2024-07-15 23:31:22.093134] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:07.209 [2024-07-15 23:31:22.210991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:07.209 23:31:22 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:07.209 23:31:22 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:28:07.209 23:31:22 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:28:07.209 23:31:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:28:07.209 23:31:22 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:28:07.209 23:31:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:07.772 23:31:22 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:28:07.772 23:31:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:28:07.772 [2024-07-15 23:31:23.033619] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:08.038 nvme0n1 00:28:08.038 23:31:23 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:28:08.038 23:31:23 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:28:08.038 23:31:23 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:28:08.038 23:31:23 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:28:08.038 23:31:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:08.038 23:31:23 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:28:08.304 23:31:23 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:28:08.304 23:31:23 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:28:08.304 23:31:23 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:28:08.304 23:31:23 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:28:08.304 23:31:23 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:08.304 23:31:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:08.304 23:31:23 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | 
select(.name == ":spdk-test:key0")' 00:28:08.304 23:31:23 keyring_linux -- keyring/linux.sh@25 -- # sn=798321683 00:28:08.304 23:31:23 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:28:08.304 23:31:23 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:28:08.304 23:31:23 keyring_linux -- keyring/linux.sh@26 -- # [[ 798321683 == \7\9\8\3\2\1\6\8\3 ]] 00:28:08.304 23:31:23 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 798321683 00:28:08.304 23:31:23 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:28:08.304 23:31:23 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:08.561 Running I/O for 1 seconds... 00:28:09.494 00:28:09.494 Latency(us) 00:28:09.494 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:09.494 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:09.494 nvme0n1 : 1.01 7183.57 28.06 0.00 0.00 17715.68 9757.58 30292.20 00:28:09.494 =================================================================================================================== 00:28:09.494 Total : 7183.57 28.06 0.00 0.00 17715.68 9757.58 30292.20 00:28:09.494 0 00:28:09.494 23:31:24 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:09.494 23:31:24 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:09.752 23:31:24 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:28:09.752 23:31:24 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:28:09.752 23:31:24 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:28:09.752 23:31:24 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:28:09.752 23:31:24 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:28:09.752 23:31:24 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:10.009 23:31:25 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:28:10.009 23:31:25 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:28:10.009 23:31:25 keyring_linux -- keyring/linux.sh@23 -- # return 00:28:10.009 23:31:25 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:10.009 23:31:25 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:28:10.009 23:31:25 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:10.009 23:31:25 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:28:10.009 23:31:25 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:10.009 23:31:25 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:28:10.009 23:31:25 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
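The check_keys pass above cross-checks SPDK's view of the key against the kernel's: the serial that keyring_get_keys reports for :spdk-test:key0 must match what keyctl search resolves for the same name, and keyctl print on that serial must return the PSK registered earlier. A condensed sketch of that verification, assuming jq is available on the host and using a relative rpc.py path; the SOCK and RPC variables are shorthand introduced here:

  SOCK=/var/tmp/bperf.sock
  RPC=./scripts/rpc.py

  # serial of the key as seen through SPDK's keyring module inside bdevperf
  spdk_sn=$($RPC -s "$SOCK" keyring_get_keys | jq -r '.[] | select(.name == ":spdk-test:key0") | .sn')

  # serial and payload as seen by the kernel session keyring
  kernel_sn=$(keyctl search @s user :spdk-test:key0)

  # both views must agree, and the payload must be the original PSK
  [[ "$spdk_sn" == "$kernel_sn" ]] || echo "serial mismatch" >&2
  keyctl print "$kernel_sn"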
00:28:10.009 23:31:25 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:10.010 23:31:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:10.268 [2024-07-15 23:31:25.481046] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:28:10.268 [2024-07-15 23:31:25.481565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d66820 (107): Transport endpoint is not connected 00:28:10.268 [2024-07-15 23:31:25.482557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d66820 (9): Bad file descriptor 00:28:10.268 [2024-07-15 23:31:25.483556] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:10.268 [2024-07-15 23:31:25.483580] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:28:10.268 [2024-07-15 23:31:25.483596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:10.268 request: 00:28:10.268 { 00:28:10.268 "name": "nvme0", 00:28:10.268 "trtype": "tcp", 00:28:10.268 "traddr": "127.0.0.1", 00:28:10.268 "adrfam": "ipv4", 00:28:10.268 "trsvcid": "4420", 00:28:10.268 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:10.268 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:10.268 "prchk_reftag": false, 00:28:10.268 "prchk_guard": false, 00:28:10.268 "hdgst": false, 00:28:10.268 "ddgst": false, 00:28:10.268 "psk": ":spdk-test:key1", 00:28:10.268 "method": "bdev_nvme_attach_controller", 00:28:10.268 "req_id": 1 00:28:10.268 } 00:28:10.268 Got JSON-RPC error response 00:28:10.268 response: 00:28:10.268 { 00:28:10.268 "code": -5, 00:28:10.268 "message": "Input/output error" 00:28:10.268 } 00:28:10.268 23:31:25 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:28:10.268 23:31:25 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:10.268 23:31:25 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:10.268 23:31:25 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:10.268 23:31:25 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:28:10.268 23:31:25 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:28:10.268 23:31:25 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:28:10.268 23:31:25 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:28:10.268 23:31:25 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:28:10.268 23:31:25 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:28:10.268 23:31:25 keyring_linux -- keyring/linux.sh@33 -- # sn=798321683 00:28:10.268 23:31:25 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 798321683 00:28:10.268 1 links removed 00:28:10.268 23:31:25 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:28:10.268 23:31:25 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:28:10.268 23:31:25 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 
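The attach with --psk :spdk-test:key1 above is the negative half of the test: the NOT wrapper expects the RPC to fail, and the trace records a JSON-RPC error (code -5, Input/output error) after the connection to 127.0.0.1:4420 is reported as not connected and the controller drops into a failed state. Cleanup then walks both key names, resolving each serial from the session keyring and unlinking it, as the trace continues below for key1. A minimal sketch of that cleanup step, with the loop written out for the two names used in this run:

  for name in :spdk-test:key0 :spdk-test:key1; do
      # resolve the kernel serial for the key and drop its link, as keyring/linux.sh does above
      sn=$(keyctl search @s user "$name") || continue
      keyctl unlink "$sn"
  done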
00:28:10.268 23:31:25 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:28:10.268 23:31:25 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:28:10.268 23:31:25 keyring_linux -- keyring/linux.sh@33 -- # sn=370435662 00:28:10.268 23:31:25 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 370435662 00:28:10.268 1 links removed 00:28:10.268 23:31:25 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2475351 00:28:10.268 23:31:25 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 2475351 ']' 00:28:10.268 23:31:25 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 2475351 00:28:10.268 23:31:25 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:28:10.268 23:31:25 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:10.268 23:31:25 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2475351 00:28:10.268 23:31:25 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:10.268 23:31:25 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:10.268 23:31:25 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2475351' 00:28:10.268 killing process with pid 2475351 00:28:10.268 23:31:25 keyring_linux -- common/autotest_common.sh@967 -- # kill 2475351 00:28:10.268 Received shutdown signal, test time was about 1.000000 seconds 00:28:10.268 00:28:10.268 Latency(us) 00:28:10.268 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:10.268 =================================================================================================================== 00:28:10.268 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:10.268 23:31:25 keyring_linux -- common/autotest_common.sh@972 -- # wait 2475351 00:28:10.526 23:31:25 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2475214 00:28:10.526 23:31:25 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 2475214 ']' 00:28:10.526 23:31:25 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 2475214 00:28:10.526 23:31:25 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:28:10.526 23:31:25 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:10.526 23:31:25 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2475214 00:28:10.526 23:31:25 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:10.526 23:31:25 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:10.526 23:31:25 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2475214' 00:28:10.526 killing process with pid 2475214 00:28:10.526 23:31:25 keyring_linux -- common/autotest_common.sh@967 -- # kill 2475214 00:28:10.526 23:31:25 keyring_linux -- common/autotest_common.sh@972 -- # wait 2475214 00:28:11.088 00:28:11.088 real 0m5.489s 00:28:11.088 user 0m10.030s 00:28:11.088 sys 0m1.652s 00:28:11.088 23:31:26 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:11.088 23:31:26 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:11.088 ************************************ 00:28:11.088 END TEST keyring_linux 00:28:11.088 ************************************ 00:28:11.089 23:31:26 -- common/autotest_common.sh@1142 -- # return 0 00:28:11.089 23:31:26 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:28:11.089 23:31:26 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:28:11.089 23:31:26 
-- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:28:11.089 23:31:26 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:28:11.089 23:31:26 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:28:11.089 23:31:26 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:28:11.089 23:31:26 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:28:11.089 23:31:26 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:28:11.089 23:31:26 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:28:11.089 23:31:26 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:28:11.089 23:31:26 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:28:11.089 23:31:26 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:28:11.089 23:31:26 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:28:11.089 23:31:26 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:28:11.089 23:31:26 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:28:11.089 23:31:26 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:28:11.089 23:31:26 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:28:11.089 23:31:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:11.089 23:31:26 -- common/autotest_common.sh@10 -- # set +x 00:28:11.089 23:31:26 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:28:11.089 23:31:26 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:28:11.089 23:31:26 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:28:11.089 23:31:26 -- common/autotest_common.sh@10 -- # set +x 00:28:13.007 INFO: APP EXITING 00:28:13.007 INFO: killing all VMs 00:28:13.007 INFO: killing vhost app 00:28:13.007 INFO: EXIT DONE 00:28:13.938 0000:82:00.0 (8086 0a54): Already using the nvme driver 00:28:13.938 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:28:13.938 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:28:13.938 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:28:13.938 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:28:13.938 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:28:14.196 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:28:14.196 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:28:14.196 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:28:14.196 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:28:14.196 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:28:14.196 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:28:14.196 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:28:14.196 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:28:14.196 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:28:14.196 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:28:14.196 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:28:15.570 Cleaning 00:28:15.570 Removing: /var/run/dpdk/spdk0/config 00:28:15.570 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:15.570 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:15.570 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:15.570 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:15.570 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:28:15.570 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:28:15.570 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:28:15.570 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:28:15.570 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:15.570 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:15.570 Removing: 
/var/run/dpdk/spdk1/config 00:28:15.570 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:28:15.570 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:28:15.570 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:28:15.570 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:28:15.570 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:28:15.570 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:28:15.570 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:28:15.570 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:28:15.570 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:28:15.570 Removing: /var/run/dpdk/spdk1/hugepage_info 00:28:15.570 Removing: /var/run/dpdk/spdk1/mp_socket 00:28:15.570 Removing: /var/run/dpdk/spdk2/config 00:28:15.570 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:28:15.570 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:28:15.570 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:28:15.570 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:28:15.570 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:28:15.570 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:28:15.570 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:28:15.570 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:28:15.570 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:28:15.570 Removing: /var/run/dpdk/spdk2/hugepage_info 00:28:15.570 Removing: /var/run/dpdk/spdk3/config 00:28:15.570 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:28:15.570 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:28:15.570 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:28:15.570 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:28:15.570 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:28:15.570 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:28:15.570 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:28:15.570 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:28:15.570 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:28:15.570 Removing: /var/run/dpdk/spdk3/hugepage_info 00:28:15.570 Removing: /var/run/dpdk/spdk4/config 00:28:15.570 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:28:15.570 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:28:15.570 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:28:15.570 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:28:15.570 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:28:15.570 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:28:15.570 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:28:15.570 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:28:15.570 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:28:15.570 Removing: /var/run/dpdk/spdk4/hugepage_info 00:28:15.570 Removing: /dev/shm/bdev_svc_trace.1 00:28:15.570 Removing: /dev/shm/nvmf_trace.0 00:28:15.570 Removing: /dev/shm/spdk_tgt_trace.pid2212009 00:28:15.570 Removing: /var/run/dpdk/spdk0 00:28:15.570 Removing: /var/run/dpdk/spdk1 00:28:15.570 Removing: /var/run/dpdk/spdk2 00:28:15.570 Removing: /var/run/dpdk/spdk3 00:28:15.570 Removing: /var/run/dpdk/spdk4 00:28:15.570 Removing: /var/run/dpdk/spdk_pid2210461 00:28:15.570 Removing: /var/run/dpdk/spdk_pid2211199 00:28:15.570 Removing: /var/run/dpdk/spdk_pid2212009 00:28:15.570 Removing: /var/run/dpdk/spdk_pid2212450 00:28:15.570 Removing: /var/run/dpdk/spdk_pid2213138 00:28:15.570 Removing: /var/run/dpdk/spdk_pid2213278 
00:28:15.570 Removing: /var/run/dpdk/spdk_pid2213996 00:28:15.570 Removing: /var/run/dpdk/spdk_pid2214122 00:28:15.570 Removing: /var/run/dpdk/spdk_pid2214372 00:28:15.570 Removing: /var/run/dpdk/spdk_pid2215559 00:28:15.570 Removing: /var/run/dpdk/spdk_pid2216469 00:28:15.570 Removing: /var/run/dpdk/spdk_pid2216791 00:28:15.570 Removing: /var/run/dpdk/spdk_pid2216976 00:28:15.570 Removing: /var/run/dpdk/spdk_pid2217182 00:28:15.570 Removing: /var/run/dpdk/spdk_pid2217371 00:28:15.570 Removing: /var/run/dpdk/spdk_pid2217536 00:28:15.570 Removing: /var/run/dpdk/spdk_pid2217806 00:28:15.570 Removing: /var/run/dpdk/spdk_pid2217990 00:28:15.570 Removing: /var/run/dpdk/spdk_pid2218182 00:28:15.570 Removing: /var/run/dpdk/spdk_pid2220599 00:28:15.570 Removing: /var/run/dpdk/spdk_pid2220827 00:28:15.570 Removing: /var/run/dpdk/spdk_pid2220999 00:28:15.570 Removing: /var/run/dpdk/spdk_pid2221008 00:28:15.570 Removing: /var/run/dpdk/spdk_pid2221431 00:28:15.570 Removing: /var/run/dpdk/spdk_pid2221448 00:28:15.570 Removing: /var/run/dpdk/spdk_pid2221877 00:28:15.570 Removing: /var/run/dpdk/spdk_pid2221900 00:28:15.570 Removing: /var/run/dpdk/spdk_pid2222167 00:28:15.570 Removing: /var/run/dpdk/spdk_pid2222182 00:28:15.570 Removing: /var/run/dpdk/spdk_pid2222344 00:28:15.570 Removing: /var/run/dpdk/spdk_pid2222474 00:28:15.570 Removing: /var/run/dpdk/spdk_pid2222853 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2223019 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2223315 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2223485 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2223521 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2223702 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2223858 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2224022 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2224290 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2224449 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2224610 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2224882 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2225035 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2225203 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2225471 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2225630 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2225818 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2226067 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2226222 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2226474 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2226655 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2226814 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2227093 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2227253 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2227408 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2227688 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2227758 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2228148 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2230284 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2257084 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2259843 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2266584 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2270508 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2272872 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2273320 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2277393 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2281392 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2281394 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2282057 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2282595 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2283252 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2283648 
00:28:15.571 Removing: /var/run/dpdk/spdk_pid2283662 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2283918 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2283930 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2283990 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2284598 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2285255 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2285909 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2286310 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2286320 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2286459 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2287477 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2288216 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2293717 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2293987 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2296510 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2300879 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2303024 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2309325 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2314683 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2315873 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2316541 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2326835 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2329023 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2354638 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2357566 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2358740 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2360673 00:28:15.571 Removing: /var/run/dpdk/spdk_pid2360817 00:28:15.828 Removing: /var/run/dpdk/spdk_pid2360948 00:28:15.828 Removing: /var/run/dpdk/spdk_pid2360975 00:28:15.828 Removing: /var/run/dpdk/spdk_pid2361527 00:28:15.828 Removing: /var/run/dpdk/spdk_pid2362767 00:28:15.828 Removing: /var/run/dpdk/spdk_pid2363579 00:28:15.828 Removing: /var/run/dpdk/spdk_pid2363894 00:28:15.828 Removing: /var/run/dpdk/spdk_pid2365594 00:28:15.828 Removing: /var/run/dpdk/spdk_pid2366054 00:28:15.828 Removing: /var/run/dpdk/spdk_pid2366569 00:28:15.828 Removing: /var/run/dpdk/spdk_pid2369033 00:28:15.828 Removing: /var/run/dpdk/spdk_pid2375218 00:28:15.828 Removing: /var/run/dpdk/spdk_pid2377996 00:28:15.828 Removing: /var/run/dpdk/spdk_pid2381911 00:28:15.828 Removing: /var/run/dpdk/spdk_pid2382984 00:28:15.828 Removing: /var/run/dpdk/spdk_pid2384077 00:28:15.828 Removing: /var/run/dpdk/spdk_pid2386644 00:28:15.828 Removing: /var/run/dpdk/spdk_pid2389017 00:28:15.828 Removing: /var/run/dpdk/spdk_pid2393433 00:28:15.828 Removing: /var/run/dpdk/spdk_pid2393450 00:28:15.828 Removing: /var/run/dpdk/spdk_pid2396915 00:28:15.828 Removing: /var/run/dpdk/spdk_pid2397062 00:28:15.828 Removing: /var/run/dpdk/spdk_pid2397198 00:28:15.828 Removing: /var/run/dpdk/spdk_pid2397467 00:28:15.828 Removing: /var/run/dpdk/spdk_pid2397472 00:28:15.828 Removing: /var/run/dpdk/spdk_pid2400246 00:28:15.828 Removing: /var/run/dpdk/spdk_pid2400575 00:28:15.828 Removing: /var/run/dpdk/spdk_pid2403251 00:28:15.829 Removing: /var/run/dpdk/spdk_pid2405177 00:28:15.829 Removing: /var/run/dpdk/spdk_pid2408534 00:28:15.829 Removing: /var/run/dpdk/spdk_pid2412001 00:28:15.829 Removing: /var/run/dpdk/spdk_pid2418572 00:28:15.829 Removing: /var/run/dpdk/spdk_pid2422987 00:28:15.829 Removing: /var/run/dpdk/spdk_pid2422989 00:28:15.829 Removing: /var/run/dpdk/spdk_pid2436384 00:28:15.829 Removing: /var/run/dpdk/spdk_pid2436801 00:28:15.829 Removing: /var/run/dpdk/spdk_pid2437230 00:28:15.829 Removing: /var/run/dpdk/spdk_pid2437733 00:28:15.829 Removing: /var/run/dpdk/spdk_pid2438315 00:28:15.829 Removing: /var/run/dpdk/spdk_pid2438719 
00:28:15.829 Removing: /var/run/dpdk/spdk_pid2439130 00:28:15.829 Removing: /var/run/dpdk/spdk_pid2439591 00:28:15.829 Removing: /var/run/dpdk/spdk_pid2442179 00:28:15.829 Removing: /var/run/dpdk/spdk_pid2442323 00:28:15.829 Removing: /var/run/dpdk/spdk_pid2446129 00:28:15.829 Removing: /var/run/dpdk/spdk_pid2446303 00:28:15.829 Removing: /var/run/dpdk/spdk_pid2447907 00:28:15.829 Removing: /var/run/dpdk/spdk_pid2452973 00:28:15.829 Removing: /var/run/dpdk/spdk_pid2452978 00:28:15.829 Removing: /var/run/dpdk/spdk_pid2455951 00:28:15.829 Removing: /var/run/dpdk/spdk_pid2457315 00:28:15.829 Removing: /var/run/dpdk/spdk_pid2458833 00:28:15.829 Removing: /var/run/dpdk/spdk_pid2459572 00:28:15.829 Removing: /var/run/dpdk/spdk_pid2461092 00:28:15.829 Removing: /var/run/dpdk/spdk_pid2462475 00:28:15.829 Removing: /var/run/dpdk/spdk_pid2467817 00:28:15.829 Removing: /var/run/dpdk/spdk_pid2468171 00:28:15.829 Removing: /var/run/dpdk/spdk_pid2468561 00:28:15.829 Removing: /var/run/dpdk/spdk_pid2470122 00:28:15.829 Removing: /var/run/dpdk/spdk_pid2470467 00:28:15.829 Removing: /var/run/dpdk/spdk_pid2470804 00:28:15.829 Removing: /var/run/dpdk/spdk_pid2473252 00:28:15.829 Removing: /var/run/dpdk/spdk_pid2473263 00:28:15.829 Removing: /var/run/dpdk/spdk_pid2474730 00:28:15.829 Removing: /var/run/dpdk/spdk_pid2475214 00:28:15.829 Removing: /var/run/dpdk/spdk_pid2475351 00:28:15.829 Clean 00:28:15.829 23:31:31 -- common/autotest_common.sh@1451 -- # return 0 00:28:15.829 23:31:31 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:28:15.829 23:31:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:15.829 23:31:31 -- common/autotest_common.sh@10 -- # set +x 00:28:15.829 23:31:31 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:28:15.829 23:31:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:15.829 23:31:31 -- common/autotest_common.sh@10 -- # set +x 00:28:15.829 23:31:31 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:28:15.829 23:31:31 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:28:15.829 23:31:31 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:28:15.829 23:31:31 -- spdk/autotest.sh@391 -- # hash lcov 00:28:15.829 23:31:31 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:28:15.829 23:31:31 -- spdk/autotest.sh@393 -- # hostname 00:28:15.829 23:31:31 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:28:16.087 geninfo: WARNING: invalid characters removed from testname! 
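From here the run is in its coverage epilogue: the lcov capture above walks the instrumented build tree and tags the test-time counters with the node's host name (spdk-gp-08), and the passes that follow merge them with the pre-test baseline and strip bundled DPDK and system sources from the report. A condensed sketch of the same flow, with the long output path shortened to an OUT variable and the shared switches collected in RC (both introduced here for readability, left unquoted on purpose so the options split into words):

  OUT=../output
  RC="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q"

  # capture counters from the instrumented build tree, tagged with the host name
  lcov $RC -c -d . -t "$(hostname)" -o "$OUT/cov_test.info"

  # merge with the baseline, then drop bundled DPDK and system headers from the total
  lcov $RC -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
  lcov $RC -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"
  lcov $RC -r "$OUT/cov_total.info" '/usr/*' -o "$OUT/cov_total.info"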
00:28:48.182 23:31:58 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:48.182 23:32:02 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:50.085 23:32:05 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:53.401 23:32:08 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:55.939 23:32:11 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:59.229 23:32:13 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:01.781 23:32:16 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:29:01.781 23:32:16 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:01.781 23:32:16 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:29:01.781 23:32:16 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:01.781 23:32:16 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:01.781 23:32:16 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.781 23:32:16 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.781 23:32:16 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.781 23:32:16 -- paths/export.sh@5 -- $ export PATH 00:29:01.781 23:32:16 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.781 23:32:16 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:29:01.781 23:32:16 -- common/autobuild_common.sh@444 -- $ date +%s 00:29:01.781 23:32:16 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721079136.XXXXXX 00:29:01.781 23:32:16 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721079136.dvWfBD 00:29:01.781 23:32:16 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:29:01.781 23:32:16 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:29:01.781 23:32:16 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:29:01.781 23:32:16 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:29:01.781 23:32:16 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:29:01.781 23:32:16 -- common/autobuild_common.sh@460 -- $ get_config_params 00:29:01.781 23:32:16 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:29:01.781 23:32:16 -- common/autotest_common.sh@10 -- $ set +x 00:29:01.781 23:32:16 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:29:01.781 23:32:16 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:29:01.781 23:32:16 -- pm/common@17 -- $ local monitor 00:29:01.781 23:32:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:01.781 23:32:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:01.781 23:32:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:01.782 23:32:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:01.782 23:32:16 -- pm/common@21 -- $ date +%s 00:29:01.782 23:32:16 -- pm/common@21 -- $ date +%s 00:29:01.782 
23:32:16 -- pm/common@25 -- $ sleep 1 00:29:01.782 23:32:16 -- pm/common@21 -- $ date +%s 00:29:01.782 23:32:16 -- pm/common@21 -- $ date +%s 00:29:01.782 23:32:16 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721079136 00:29:01.782 23:32:16 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721079136 00:29:01.782 23:32:16 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721079136 00:29:01.782 23:32:16 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721079136 00:29:01.782 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721079136_collect-vmstat.pm.log 00:29:01.782 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721079136_collect-cpu-temp.pm.log 00:29:01.782 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721079136_collect-cpu-load.pm.log 00:29:01.782 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721079136_collect-bmc-pm.bmc.pm.log 00:29:02.719 23:32:17 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:29:02.719 23:32:17 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:29:02.719 23:32:17 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:29:02.719 23:32:17 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:29:02.719 23:32:17 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:29:02.719 23:32:17 -- spdk/autopackage.sh@19 -- $ timing_finish 00:29:02.719 23:32:17 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:29:02.719 23:32:17 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:29:02.719 23:32:17 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:29:02.719 23:32:17 -- spdk/autopackage.sh@20 -- $ exit 0 00:29:02.719 23:32:17 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:29:02.719 23:32:17 -- pm/common@29 -- $ signal_monitor_resources TERM 00:29:02.719 23:32:17 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:29:02.719 23:32:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:02.719 23:32:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:29:02.719 23:32:17 -- pm/common@44 -- $ pid=2485038 00:29:02.719 23:32:17 -- pm/common@50 -- $ kill -TERM 2485038 00:29:02.719 23:32:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:02.719 23:32:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:29:02.719 23:32:17 -- pm/common@44 -- $ pid=2485039 00:29:02.719 23:32:17 -- pm/common@50 -- $ kill 
-TERM 2485039 00:29:02.719 23:32:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:02.719 23:32:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:29:02.719 23:32:17 -- pm/common@44 -- $ pid=2485042 00:29:02.719 23:32:17 -- pm/common@50 -- $ kill -TERM 2485042 00:29:02.719 23:32:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:02.719 23:32:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:29:02.719 23:32:17 -- pm/common@44 -- $ pid=2485070 00:29:02.719 23:32:17 -- pm/common@50 -- $ sudo -E kill -TERM 2485070 00:29:02.719 + [[ -n 2126533 ]] 00:29:02.719 + sudo kill 2126533 00:29:02.729 [Pipeline] } 00:29:02.748 [Pipeline] // stage 00:29:02.755 [Pipeline] } 00:29:02.774 [Pipeline] // timeout 00:29:02.780 [Pipeline] } 00:29:02.798 [Pipeline] // catchError 00:29:02.803 [Pipeline] } 00:29:02.822 [Pipeline] // wrap 00:29:02.829 [Pipeline] } 00:29:02.846 [Pipeline] // catchError 00:29:02.856 [Pipeline] stage 00:29:02.858 [Pipeline] { (Epilogue) 00:29:02.874 [Pipeline] catchError 00:29:02.876 [Pipeline] { 00:29:02.893 [Pipeline] echo 00:29:02.895 Cleanup processes 00:29:02.902 [Pipeline] sh 00:29:03.186 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:29:03.186 2485168 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:29:03.186 2485303 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:29:03.202 [Pipeline] sh 00:29:03.485 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:29:03.485 ++ grep -v 'sudo pgrep' 00:29:03.485 ++ awk '{print $1}' 00:29:03.485 + sudo kill -9 2485168 00:29:03.498 [Pipeline] sh 00:29:03.783 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:29:11.898 [Pipeline] sh 00:29:12.184 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:29:12.184 Artifacts sizes are good 00:29:12.199 [Pipeline] archiveArtifacts 00:29:12.207 Archiving artifacts 00:29:12.437 [Pipeline] sh 00:29:12.721 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:29:12.739 [Pipeline] cleanWs 00:29:12.756 [WS-CLEANUP] Deleting project workspace... 00:29:12.756 [WS-CLEANUP] Deferred wipeout is used... 00:29:12.788 [WS-CLEANUP] done 00:29:12.790 [Pipeline] } 00:29:12.812 [Pipeline] // catchError 00:29:12.826 [Pipeline] sh 00:29:13.106 + logger -p user.info -t JENKINS-CI 00:29:13.115 [Pipeline] } 00:29:13.133 [Pipeline] // stage 00:29:13.139 [Pipeline] } 00:29:13.156 [Pipeline] // node 00:29:13.163 [Pipeline] End of Pipeline 00:29:13.198 Finished: SUCCESS